doc_1400
import sklearn

hex_train = '0x504F1728378126389BACDDDDDFF12873788912893788265722F75706C6F61642F7068702F75706C6F61642E7068703F547970653D4D6564696120485454502FAABBD10D0A436F6E74656E742D547970653A206D756C7469706172742F666F726D2D646174613B20436861727365743D5554462D383B20626F756E646172793D5430504861636B5465616D5F5745424675636B0AD0557365722D4167656E743A205765624675636B205430504861636B5465616D207777772E7430702E78797A200D0A526566657265723A206874747012334BDBFABFBDBF123FBDFBE74656E742D4C656E6774683A203234370D0A4163636570743A202A2F2A0D0A486F73743A206F6365616E2E6B697374691636B5465616D5F5745424675636B0D0A436F6E74656E742D446973706F736974696F6E3A20666F726D2D646174613B206E616D653D224E657746696C65223B2066696C656E616D653D224C75616E2E747874220D0A436F6E74656E742D547970653A20696D6165672F6A7065670D0A0D0A3C3F7068700D0A40707265675F7265706C61636528222F5B706167656572726F725D2F65222C245F504F53545B27446176F756E6427293B0D0A3F3E0D0A2D2D5430504861636B5465616D5F5745424675636B2D2D0D0A'
All the training data I have are values like this. I don't know how to preprocess these values for use as training data. Should I convert them into float types?
A: Here is a function to convert a hex string to a decimal integer:
import binascii

def convert_hex_to_dec(string):
    try:
        return int(string, 16)
    except ValueError:
        # Not a plain hex literal: hex-encode the raw bytes instead
        return int(binascii.hexlify(string.encode('utf-8')), 16)
    except TypeError:
        # Non-string input (e.g. None): fall back to zero
        return 0
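As for using these values for training: a single huge integer is rarely a useful feature. One common alternative (an assumption on my part, not something from the question) is to expand each hex string into a fixed-length vector of byte values scaled into [0, 1], which most sklearn estimators can consume; `hex_to_byte_features` below is a hypothetical helper:

```python
def hex_to_byte_features(hex_string, length=8):
    """Expand a '0x...' hex string into a fixed-length list of byte
    values scaled into [0, 1]; pad with zeros or truncate to `length`."""
    s = hex_string[2:] if hex_string.startswith('0x') else hex_string
    if len(s) % 2:              # tolerate odd-length strings
        s = '0' + s
    data = bytes.fromhex(s)
    vec = [b / 255.0 for b in data[:length]]
    vec += [0.0] * (length - len(vec))
    return vec

# One row of training data:
row = hex_to_byte_features('0xDEADBEEF', length=8)
```

With a list of such rows you can build the usual X matrix for any classifier; `length` would be chosen to cover the longest payload you care about.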
doc_1401
The only error I am given is A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x8 in tid 9408 (GLThread 27055), pid 9296 (fluidSimulation)
I started to do some digging in the gl_code.cpp file where I pass values to exposed C++ methods and I believe I have identified the issue, but I am stuck on a fix...
There is the following C++ method that is called each time the Activity is created:
//This is global as it is used in other methods
void *fluid;

extern "C" JNIEXPORT void JNICALL
Java_com_android_ui_fluidSimulation_FluidLib_init(JNIEnv *env, jobject obj, jint width,
                                                  jint height) {
    fluid = fluidCreate((int) width, (int) height);
}
Now, I have figured out that the fluid pointer is being held in memory after the activity is exited when onBackPressed() is called. I have worked out that this method needs to be called each time the activity starts, but I need to understand how to reset the pointer.
I have tried delete fluid; before calling fluidCreate, but this doesn't work. Is there a work around for this problem or am I at the mercy of the library and better off abandoning it?
UPDATE 1
So, every frame the below method is called and I have noticed the crash happens when the fluid is added to the frame.
static double now_ms(void) {
    struct timespec res;
    clock_gettime(CLOCK_REALTIME, &res);
    return 1000.0 * res.tv_sec + (double) res.tv_nsec / 1e6;
}

extern "C" JNIEXPORT void JNICALL
Java_com_android_ui_fluidSimulation_FluidLib_step(JNIEnv *env, jobject obj) {
    double t = now_ms();
    //Below is printed
    __android_log_write(ANDROID_LOG_ERROR, "Tag", "Gets before fluidOnFrame");
    fluidOnFrame(fluid, t);
    //Below is not printed
    __android_log_write(ANDROID_LOG_ERROR, "Tag", "Gets after fluidOnFrame");
}
These methods are called in Native Android as follows:
private class Renderer : GLSurfaceView.Renderer {
    override fun onDrawFrame(gl: GL10) {
        FluidLib.step()
    }

    override fun onSurfaceChanged(gl: GL10, width: Int, height: Int) {
        FluidLib.init(width, height)
    }

    override fun onSurfaceCreated(gl: GL10, config: EGLConfig) {
        // Do nothing.
    }
}
The methods used are defined as follows:
object FluidLib {
    /**
     * @param width the current view width
     * @param height the current view height
     */
    external fun init(width: Int, height: Int)
    external fun destroy()
    external fun step()

    init {
        System.loadLibrary("gl_code")
    }
}
My file to interact with the C++ library:
/**
* C-Style API to enable the fluid component to be used with C-based clients (i.e. Swift)
*/
#ifndef Fluid_h
#define Fluid_h
enum PointerType {
    MOUSE = 0,
    TOUCH = 1,
    PEN = 2
};
#ifdef __cplusplus
extern "C" {
#endif
void* fluidCreate(int width, int height);
void fluidOnFrame(void* fluidPtr, double frameTime_ms);
#ifdef __cplusplus
}
#endif
#endif /* Fluid_h */
UPDATE 2
To try to understand the cause of the problem better, I ended up creating an if statement around my fluidCreate method to see if it was the pointer or some other issue as follows:
void *fluid;
void *newFluid;

extern "C" JNIEXPORT void JNICALL
Java_com_android_ui_fluidSimulation_FluidLib_init(JNIEnv *env, jobject obj, jint width,
                                                  jint height) {
    if (fluid == 0) {
        fluid = fluidCreate((int) width, (int) height);
    } else {
        newFluid = fluidCreate((int) width, (int) height);
    }
}
Then did the following on my method that's called each frame:
extern "C" JNIEXPORT void JNICALL
Java_com_android_ui_fluidSimulation_FluidLib_step(JNIEnv *env, jobject obj) {
    double t = now_ms();
    if (newFluid == 0) {
        __android_log_write(ANDROID_LOG_ERROR, "Tag", "Original Fluid on Frame");
        fluidOnFrame(fluid, t);
    } else {
        __android_log_write(ANDROID_LOG_ERROR, "Tag", "New Fluid on Frame");
        fluidOnFrame(newFluid, t);
    }
}
Now, even though different pointers are used, the crash is still caused, which makes me think that there seems to be a bug in the C++ library fluidOnFrame method...
A: I think you are heading in the right direction. The crash log looks like a memory error to me as well, and I suspect the memory is not being freed properly.
The fluid pointer needs to be destroyed when you exit your activity. I would recommend writing another function as follows.
extern "C" JNIEXPORT void JNICALL
Java_com_android_ui_fluidSimulation_FluidLib_destroy(JNIEnv *env, jobject obj) {
    // Caveat: `delete` on a void* is undefined behavior; if the library
    // exposes its own destroy/free function, prefer calling that instead.
    delete fluid;
    fluid = nullptr;  // so a later init can safely re-create it
}
And call this function in your Activity's onPause or onDestroy function.
I hope that will destroy the memory reference and will avoid the Fatal signal 11 (SIGSEGV) crash issue that you are having.
doc_1402
$_POST['image'] is a URL of an image. $new_name is supposed to be added while running the switch.
$image = $_POST['image'];
$ch = curl_init($image);
curl_setopt($ch, CURLOPT_HEADER, 0);
$imagedata = curl_exec($ch);
list($width, $height, $type) = getimagesize($filename);
switch ($type) {
case 1: ....... //Give the filename a .gif extension and so on for every filetype
}
$fp = fopen('../images/' . $new_name, 'wb');
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_close($ch);
fclose($fp);
!FOUND ANSWER! (sorry if I'm answering my own question the wrong way (by simply "editing" the question); comment on how to do it correctly if I'm wrong ;)
I simply need to run the cURL function twice, but with different handles. The first one gets me the info about the file, and the second one saves it with the $new_name.
$image = $_POST['image'];
$ch = curl_init($image);
curl_setopt($ch, CURLOPT_HEADER, 0);
$imagedata = curl_exec($ch);
$info = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

if ($info == "image/jpeg") {
    // Running code that will give the image an appropriate extension...
}

$cf = curl_init($image);
$fp = fopen('../images/' . "image.jpg", 'wb');
curl_setopt($cf, CURLOPT_FILE, $fp);
curl_exec($cf);
curl_close($cf);
fclose($fp);
doc_1403
Let's say I have an Event, with Registrants, and they can pay for the event using one or more payments.
I'm trying to create a payment linked to a registrant (who is linked to an event).
So my payment should have both registrant_id and event_id.
My URL looks something like this: (nested routes)
http://mysite.com/events/1/registrants/1/payments/new
My controller looks something like:
def create
  @event = Event.find(params[:event_id])
  @registrant = Registrant.find(:first, conditions: { id: params[:registrant_id], event_id: params[:event_id] })
  @payment = Payment.new params[:payment]
end
I know there is a much better way to do it, but I'm having trouble with the wording to properly google it :)
What syntax should I be using to make the .new automatically aware of the event_id and registrant_id?
A: It's not great practice to set id attributes directly, as the id might not refer to an actual database row. The normal thing to do here would be to use CanCan (https://github.com/ryanb/cancan), which seems like it would solve all your problems.
EDIT:
If you're not using authentication of any kind then I'd either put the load methods in before_filters to keep things clean:
before_filter :load_event

def load_event
  @event = Event.find params[:event_id]
end
or define some funky generic loader (unnecessarily meta and complex and not recommended):
_load_resource :event

def self._load_resource resource_type
  before_filter do
    resource = resource_type.to_s.camelize.constantize.find params[:"#{resource_type}_id"]
    instance_variable_set :"@#{resource_type}", resource
  end
end
A: Based on the discussion in the comments, there are several ways that the question can be addressed: the direct way and the Rails way.
The direct approach to creating objects that are related is to create the object using new_object = ClassName.new as suggested in the question. Then take the id of the created object and set that on an existing object (directly with existing_object.id = new_object.id or through some other method if additional logic is required). Or set the id on a new object by defining a custom initializer, such as:
class Payment
  def initialize(id_of_registrant)
    @registrant_id = id_of_registrant
  end
  ...
end
The advantage of this approach is that it allows you to assign registrant IDs that may come from a range of objects with different classes, without having to deal with unnecessary or perhaps incorrect (for your solution) inheritance and polymorphism.
The Rails way, if you always have a direct relationship (1 to 1) between a Registrant and a 'mandatory' Payment is to use a has_many or belongs_to association, as described in the Rails guide: http://guides.rubyonrails.org/association_basics.html
For the example classes from the question:
class Registrant < ActiveRecord::Base
  has_one :payment
end

class Payment < ActiveRecord::Base
  belongs_to :registrant
end
You will want to use the appropriate migration to create the database tables and foreign keys that go with this. For example:
class CreateRegistrants < ActiveRecord::Migration
  def change
    create_table :registrants do |t|
      t.string :name
      t.timestamps
    end
    create_table :payments do |t|
      t.integer :registrant_id
      t.string :account_number
      t.timestamps
    end
  end
end
Of course, if your registrants only optionally make a payment, or make multiple payments, then you will need to look at using the has_many association.
With the has and belongs associations, you can then do nice things like:
@payment.registrant = @registrant
if you have instantiated the objects by hand, or
@payment = Payment.new(payment_amount)
@registrant = @payment.build_registrant(:registrant_number => 123,
                                        :registrant_name => "John Doe")
if you would like the associations populated automatically.
The Rails Guide has plenty of examples, though in my experience only trying the most appropriate one for your actual use case will show if there are restrictions that could not be anticipated. The Rails approach will make future queries and object building much easier, but if you have a very loose relationship model for your objects you may find it becomes restrictive or unnatural and the equivalent associations are better coded by hand with your additional business rules.
doc_1404
My purpose is to index uploaded images' data. Some information comes from a db, for example the owner of the image or the path where the file is stored. Other information, mostly metadata, is extracted from the image file. My idea is to set up a DataImportHandler that extracts data from the db, and a Transformer that extracts metadata and puts it in the Solr document.
Can someone explain me how custom transformers works?
Thanks in advance for the help
A: The class will have to be available on the class path visible to Solr, usually through a .jar file in one of the configured lib directories; you can also add your own <lib> entry to your Solr configuration file (solrconfig.xml; any paths will be relative to this file). See How to Load Plugins for more information about how Solr finds your plugin code.
Your transformer will then receive the whole row from the DB, where you can read the file path and add new columns to the row returned to the DIH importer to be included in your document.
doc_1405
In file included from threads.cpp:1:
In file included from /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/iostream:37:
In file included from /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/ios:215:
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/__locale:401:32: error: use of undeclared identifier '_ISspace'
static const mask space = _ISspace;
^
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/__locale:402:32: error: use of undeclared identifier '_ISprint'
static const mask print = _ISprint;
^
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/__locale:403:32: error: use of undeclared identifier '_IScntrl'
static const mask cntrl = _IScntrl;
^
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/__locale:404:32: error: use of undeclared identifier '_ISupper'
static const mask upper = _ISupper;
^
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/__locale:405:32: error: use of undeclared identifier '_ISlower'
static const mask lower = _ISlower;
^
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/__locale:406:32: error: use of undeclared identifier '_ISalpha'
static const mask alpha = _ISalpha;
Can you please help me resolve this __locale issue while compiling the C++ code?
#include <iostream>
#include <thread>

using namespace std;

void fun(void)
{
    cout << "Vaule " << 10 << endl;
}

int main()
{
    thread t1(fun);
    thread t2(fun);
    return 0;
}
compiling command:
g++ -std=c++11 -Wall -g thread.cpp -o thread.out
A: Two things fix the problem you're having.
Thing the first, add the compiler option -pthread. My compile command: clang++ -Wall -Wextra -std=c++11 -pthread main.cpp
Thing the second, join your threads before ending the program.
t1.join();
t2.join();
Something it does not fix, is that your std::cout statement will likely be jumbled because the threads simply dump their data into the single stream as they please. For an example, my output was the following:
Vaule Vaule 10
10
In order to fix that, you'll likely need to place a lock(mutex) around the std::cout statement.
As I said in my comment, I do not recommend using g++ unless you installed it yourself. The command you're using is an alias for Apple clang, which is apparent from the version text you left out:
❯ g++ --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 12.0.0 (clang-1200.0.32.29)
Target: x86_64-apple-darwin20.3.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Contrast with clang++
❯ clang++ --version
Apple clang version 12.0.0 (clang-1200.0.32.29)
Target: x86_64-apple-darwin20.3.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Of note in the g++ section is the 'Configured with' line. It uses a Standard Library from gcc 4.2.1, which is pre-C++11. You should not have left that information out.
doc_1406
I thought that SoX or similar could do the trick for me, but SoX only works with files, and other similar libraries that support adding sound effects seem to apply them only when the sound is played. My problem with this is that I want to apply the effect to the samples in my buffer without playing them.
I have never worked with audio, but reading about PCM data I have learned that I can apply gain by multiplying each sample value, for example. But I'm looking for any library or relatively easy algorithms that I can use directly on my buffer to get the sound effects applied.
I'm sure there are a lot of solutions to my problem out there if you know what to look for, but it's my first time with audio "processing" and I'm lost, as you can see.
A: For everyone like me, interested in learning DSP related to audio processing with C++ I want to share my little research results and opinion, and perhaps save you some time :)
After trying several DSP libraries, I finally found The Synthesis ToolKit in C++ (STK), an open-source library that offers clear, easy interfaces and easy-to-understand code that you can dive into to learn about various basic DSP algorithms.
So, I recommend that anyone who is starting out with no previous experience take a look at this library.
A: Your int16_t[] buffer contains a sequence of samples. They represent instantaneous amplitude levels. Think of them as the voltage to apply to the speaker at the corresponding instant in time. They are signed numbers with values in the range [-32768, 32767]. A stream of constant zeros means silence. A stream of constant -32000 (for example) also means silence, but it will eventually burn your speaker coil. The position in the array represents time, and the value of each sample represents voltage.
If you want to mix two sample streams together, for example to apply a chirp, you get yourself a sample stream with the chirp in it (record a bird or something). You then add the two sounds sample by sample.
You can do a super-cheesy reverb effect by taking your original sound buffer, lowering its volume (perhaps by dividing all the samples by a constant), and adding it back to your original stream, but shifting the samples by a tenth of a second's worth of array position.
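The mixing and cheap-reverb recipes above are language-agnostic; here is a sketch in Python (the same arithmetic applies to an int16_t buffer in C++; the function names and the attenuation factor are illustrative):

```python
def mix(a, b):
    """Mix two equal-length 16-bit sample streams, clamping to the int16 range."""
    return [max(-32768, min(32767, x + y)) for x, y in zip(a, b)]

def cheap_reverb(samples, delay, attenuation=4):
    """Add a quieter copy of the signal back onto itself, `delay` samples late."""
    out = list(samples)
    for i in range(delay, len(samples)):
        echo = samples[i - delay] // attenuation   # lowered-volume echo
        out[i] = max(-32768, min(32767, out[i] + echo))
    return out
```

For a 44.1 kHz stream, a tenth of a second corresponds to delay = 4410.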
Those are the basics of audio processing. Things get very sophisticated indeed. This field is known as "digital signal processing" and there are plenty of books on the subject.
A: You can do it either by hacking the audio buffer and trying to do some effects like gain and threshold with simple math operations, or do it correctly using proper DSP algorithms. If you wish to do it correctly, I would recommend using the Speex library. It's open source and well tested: www.speex.org. The code should compile on MSVC or Linux with minimal effort. This is the fastest way to get good audio code working with proper DSP techniques. Your code would look like the following; please read the AEC example.
st = speex_echo_state_init(NN, TAIL);
den = speex_preprocess_state_init(NN, sampleRate);
speex_echo_ctl(st, SPEEX_ECHO_SET_SAMPLING_RATE, &sampleRate);
speex_preprocess_ctl(den, SPEEX_PREPROCESS_SET_ECHO_STATE, st);
You need to setup the states, the code testecho includes these.
doc_1407
I tried the following in getValueAt(.) but no luck:
if (value instanceof Date)
{
    //System.out.println("isDate");
    DateFormat formatter = DateFormat.getDateInstance();
    SimpleDateFormat f = new SimpleDateFormat("MM/dd/yy");
    value = f.format(value);
    Date parsed = (Date) value;
    try {
        parsed = (Date) f.parse(value.toString());
    } catch (ParseException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    value = parsed.toString();
}
The println(.) is never printed so it isn't even getting to that. The Format that is being displayed is Apr 10, 1992 but I want 04/10/92
While we are on the topic of Date in JTables... I have isCellEditable(.) as true but I cannot edit the Date cells. How do you do this?
A:
The Format that is being displayed is
Apr 10, 1992
Sounds like a toString() representation of the Date is being stored in the TableModel and not a Date Object. So you need to check how your data is copied from the ResultSet to the TableModel. Make sure you are using the resultSet.getObject() method. Or maybe the problem is that you are storing a String in your database that is formatted the way you see it.
Anyway, once you are able to actually store a Date object in the TableModel, check out Table Format Renderers which allows you to create a custom renderer with a customized date format in a single line of code.
A: You should create a subclass of DefaultTableCellRenderer and override setValue(Object) then set the cell renderer for the whole column.
public class DateCellRenderer extends DefaultTableCellRenderer {
    public DateCellRenderer() { super(); }

    @Override
    public void setValue(Object value) {
        SimpleDateFormat sdf = new SimpleDateFormat("MM/dd/yy");
        setText((value == null) ? "" : sdf.format(value));
    }
}
Then in your user code do something like table.getColumnModel().getColumn(index).setCellRenderer(new DateCellRenderer());
A: Do not override getValue, use a TableCellRenderer instead:
TableCellRenderer tableCellRenderer = new DefaultTableCellRenderer() {
    SimpleDateFormat f = new SimpleDateFormat("MM/dd/yy");

    public Component getTableCellRendererComponent(JTable table,
            Object value, boolean isSelected, boolean hasFocus,
            int row, int column) {
        if (value instanceof Date) {
            value = f.format(value);
        }
        return super.getTableCellRendererComponent(table, value, isSelected,
                hasFocus, row, column);
    }
};

table.getColumnModel().getColumn(0).setCellRenderer(tableCellRenderer);
A: There is a third-party add-on available: moment.js.
Just add it via NuGet and move the scripts to your content scripts folder. Add this line (example):
Javascript:
<script src="@Url.Content("~/Content/Scripts/moment.min.js")" type="text/javascript"></script>
And then we change our field declaration in jtable.
Javascript:
DateAdded: {
    title: 'Date added',
    width: '20%',
    sorting: false,
    display: function (data) {
        return moment(data.record.DateAdded).format('DD/MM/YYYY HH:mm:ss');
    }
}
doc_1408
def foo(): # asasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasasas
pass
even though the line is clearly longer than the PEP8 limit. Is there a setting that I'm missing or does Anaconda not cover this with its autoformatter? Are there any other PEP8 formatting packages / comment-wrapping packages that anyone recommends?
Alternatively, are comments exempt from this 79 character limit by any chance?
doc_1409
Ideally I would like to have one file main.py with my main code and one file common.py with all my global variables (declared/initialized[1]) and with a from common import * I would be able to use every variable declared in common.py.
I apologize if it's a duplicate. I did search around but I can't seem to find anything. Maybe you could do it with classes(?), but I'm trying to avoid using them for this particular program (but if there's no other way, I'll take it).
EDIT: Adding code to show the example of two files not working:
file main.py
from common import *

def func1():
    x = 2
    print('func1', x)

def func2():
    print('fuc2', x)

print('a', x)
func1()
func2()
file common.py
x=3
this prints:
a 3
func1 2
fuc2 3
but it should print:
a 3
func1 2
fuc2 2
because func2, even though it's called after func1 (where x is assigned a value), sees x as 3. This means func1 didn't actually use the "global" variable x but a local variable x, different from the "global" one. Correct me if I'm wrong.
[1] I know that in python you don't really declare variables, you just initialize them but you get my point
A: Technically, every module-level variable is global, and you can mess with them from anywhere. A simple example you might not have realized is sys:
import sys
myfile = open('path/to/file.txt', 'w')
sys.stdout = myfile
sys.stdout is a global variable. Many things in various parts of the program - including parts you don't have direct access to - use it, and you'll notice that changing it here will change the behavior of the entire program. If anything, anywhere, uses print(), it will output to your file instead of standard output.
You can co-opt this behavior by simply making a common sourcefile that's accessible to your entire project:
common.py
var1 = 3
var2 = "Hello, World!"
sourcefile1.py
from . import common
print(common.var2)
# "Hello, World!"
common.newVar = [3, 6, 8]
fldr/sourcefile2.py
from .. import common
print(common.var2)
# "Hello, World!"
print(common.newVar)
# [3, 6, 8]
As you can see, you can even assign new properties that weren't there in the first place (common.newVar).
It might be better practice, however, to simply place a dict in common and store your various global values in that - pushing a new key to a dict is an easier-to-maintain operation than adding a new attribute to a module.
If you use this method, you're going to want to be wary of doing from .common import *. This locks you out of ever changing your global variables, because of namespaces - when you assign a new value, you're modifying only your local namespace.
In general, you shouldn't be doing import * for this reason, but this is particular symptom of that.
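To make the from-import pitfall concrete, here is a self-contained sketch; the types.ModuleType scaffolding exists only so the example runs as a single file (in practice common would simply be common.py):

```python
import sys
import types

# Stand-in for common.py: a module with a plain variable and a dict.
common = types.ModuleType("common")
common.x = 3
common.settings = {"x": 3}
sys.modules["common"] = common

# `from common import x` copies the *binding*: rebinding the local
# name afterwards never touches common.x.
from common import x
x = 2
assert sys.modules["common"].x == 3        # unchanged

# `import common` plus attribute access always sees the current value.
import common as c
c.x = 2
assert sys.modules["common"].x == 2

# Mutating a shared dict is visible through any import style.
from common import settings
settings["x"] = 42
assert c.settings["x"] == 42
```

This is why the dict-in-common pattern survives any import style: you mutate the shared object instead of rebinding a name.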
doc_1410
It works when UITextView is not responding by using UIKeyCommands
override var keyCommands: [UIKeyCommand]? {
    return [
        UIKeyCommand(input: "\r", modifierFlags: [], action: #selector(taskOne)),
        UIKeyCommand(input: "\r", modifierFlags: .shift, action: #selector(taskTwo)),
    ]
}
But when UITextView is being edited, it just adds a new line and doesn't detect these UIKeyCommands.
How can I solve this issue?
A: Override the pressesBegan function:
override func pressesBegan(_ presses: Set<UIPress>, with event: UIPressesEvent?) {
    guard let key = presses.first?.key else { return }
    switch key.keyCode {
    case .keyboardReturn:
        if key.modifierFlags == .shift {
            taskTwo()
        } else {
            taskOne()
        }
        super.pressesBegan(presses, with: event)
    default:
        super.pressesBegan(presses, with: event)
    }
}
and override func textView(_ textView: UITextView, shouldChangeTextIn range: NSRange, replacementText text: String) -> Bool
for UITextViewDelegate
check this answer
doc_1411
Initially, I thought to separate prices by $10 increments, but I realized that this isn't a good method of grouping because prices can vary greatly due to outliers or unrelated items, etc.
If I have a list of prices like so: [90, 92, 95, 99, 1013, 1100]
my desire is for the application to separate the values into:
{nineties: 4, thousands: 2}
but I'm just not sure how to tell Python to do this. Preferably, the simpler this snippet is to integrate into my code, the better!
Any help or suggestions would be appreciated!
A: The technique you use depends on your notion of what a group is.
If the number of groups is known, use k-means with k == 2. See this link for working code in pure Python:
from kmeans import k_means, assign_data

prices = [90, 92, 95, 99, 1013, 1100]
points = [(x,) for x in prices]
centroids = k_means(points, k=2)
labeled = assign_data(centroids, points)
for centroid, group in labeled.items():
    print('Group centered around:', centroid[0])
    print([x for (x,) in group])
    print()
This outputs:
Group centered around: 94.0
[90, 92, 95, 99]
Group centered around: 1056.5
[1013, 1100]
Alternatively, if a fixed maximum distance between elements defines the groupings, then just sort and loop over the elements, checking the distance between them to see whether a new group has started:
max_gap = 100
prices.sort()
groups = []
last_price = prices[0] - (max_gap + 1)
for price in prices:
    if price - last_price > max_gap:
        groups.append([])
    groups[-1].append(price)
    last_price = price
print(groups)
This outputs:
[[90, 92, 95, 99], [1013, 1100]]
A: I think scatter plots are underrated for this sort of thing. I recommend plotting the distribution of prices, then choosing threshold(s) that look right for your data, then adding any descriptive stats by group that you want.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Reproduce your data
prices = pd.DataFrame(pd.Series([90, 92, 95, 99, 1013, 1100]), columns=['price'])
# Add an arbitrary second column so I have two columns for scatter plot
prices['label'] = 'price'
# jitter=True spreads your data points out horizontally, so you can see
# clearly how much data you have in each group (groups based on vertical space)
sns.stripplot(data=prices, x='label', y='price', jitter=True)
plt.show()
Any number between 200 and 1,000 separates your data nicely. I'll arbitrarily choose 200, maybe you'll choose different threshold(s) with more data.
# Add group labels, Get average by group
prices['price group'] = pd.cut(prices['price'], bins=(0,200,np.inf))
prices['group average'] = prices.groupby('price group')['price'].transform(np.mean)
price label price group group average
0 90 price (0, 200] 94.0
1 92 price (0, 200] 94.0
2 95 price (0, 200] 94.0
3 99 price (0, 200] 94.0
4 1013 price (200, inf] 1056.5
5 1100 price (200, inf] 1056.5
A: Naive approach to point in the right direction:
>>> from math import log10
>>> from collections import Counter
>>> def f(i):
...     x = 10**int(log10(i))  # largest of 1, 10, 100, etc. <= i
...     return i // x * x
...
>>> lst = [90, 92, 95, 99, 1013, 1100]
>>> c = Counter(map(f, lst))
>>> c
Counter({90: 4, 1000: 2})
A: Assume that your buckets are somewhat arbitrary in size (like between 55 and 95, or between 300 and 366); then you can use a binning approach to classify a value into a bin range. The cut-offs for the various bins can be anything you want so long as they are increasing left to right.
Assume these bin values:
bins=[0,100,1000,10000]
Then:
[0,100,1000,10000]
^ bin 1 -- 0 <= x < 100
^ bin 2 -- 100 <= x < 1000
^ bin 3 -- 1000 <= x < 10000
You can use numpy digitize to do this:
import numpy as np

bins = np.array([0.0, 100, 1000, 10000])
prices = np.array([90, 92, 95, 99, 1013, 1100])
inds = np.digitize(prices, bins)
You can also do this in pure Python:
bins = [0.0, 100, 1000, 10000]
tests = list(zip(bins, bins[1:]))  # list() so it can be re-iterated for every price
prices = [90, 92, 95, 99, 1013, 1100]
inds = []
for price in prices:
    if price < min(bins) or price > max(bins):
        idx = -1
    else:
        for idx, test in enumerate(tests, 1):
            if test[0] <= price < test[1]:
                break
    inds.append(idx)
Then classify by bin (from the result of either approach above):
for i, e in enumerate(prices):
    print("{} <= {} < {} bin {}".format(bins[inds[i]-1], e, bins[inds[i]], inds[i]))
0.0 <= 90 < 100 bin 1
0.0 <= 92 < 100 bin 1
0.0 <= 95 < 100 bin 1
0.0 <= 99 < 100 bin 1
1000 <= 1013 < 10000 bin 3
1000 <= 1100 < 10000 bin 3
Then filter out the values of interest (bin 1) versus the outlier (bin 3)
>>> my_prices = [price for price, bin in zip(prices, inds) if bin == 1]
>>> my_prices
[90, 92, 95, 99]
doc_1412
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?person ?birthPlace
WHERE {
  ?person rdfs:label ?label.
  ?person rdf:type dbo:Person.
  ?person <http://dbpedia.org/property/birthPlace> <http://dbpedia.org/resource/Barcelona>.
}
However, I do not know how to get the birthPlace. I want a variable that says next to each name that Barcelona is the place of birth. Any ideas?
A: How about this:
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?person ?birthPlace
WHERE {
  ?person rdfs:label ?label.
  ?person a dbo:Person.
  ?person <http://dbpedia.org/property/birthPlace> ?birthPlace.
  FILTER (?birthPlace = <http://dbpedia.org/resource/Barcelona>)
}
Note that your query has a pattern to match labels, but the labels are not returned. That leads to duplicate results because some people have multiple labels (in different languages). Remove the pattern, or add ?label to the SELECT clause.
You can abbreviate <http://dbpedia.org/property/birthPlace> to dbp:birthPlace.
doc_1413
With plain-text fields it already works, but I'm having a hard time setting RichTextFormat text as the value, since every time I use "setRichTextValue", then save and open the document, the field is empty (unchanged).
Code is as follows (stripped from multiple functions):
PDDocument pdfDoc = PDDocument.load(new File("my pdf path"));
PDDocumentCatalog docCatalog = pdfDoc.getDocumentCatalog();
PDAcroForm acroForm = docCatalog.getAcroForm();
PDField field = acroForm.getField("field-to-change");
if (field instanceof PDTextField) {
PDTextField tfield = (PDTextField) field;
COSDictionary dict = field.getCOSObject();
//COSString defaultAppearance = (COSString) dict.getDictionaryObject(COSName.DA);
//if (defaultAppearance != null && font != "" && size > 0)
// dict.setString(COSName.DA, "/" + font + " " + size + " Tf 0 g");
boolean rtf = true;
    String val = "{\\rtf1\\ansi\\deff0 {\\colortbl;\\red0\\green0\\blue0;\\red255\\green0\\blue0;} \\cf2 Red RTF Text \\cf1 }";
tfield.setRichText(rtf);
if (rtf)
tfield.setRichTextValue(val);
else
tfield.setValue(val);
}
// save document etc.
By digging through the PDFBox documentation I found this for .setRichTextValue(String r):
* Set the fields rich text value.
* Setting the rich text value will not generate the appearance
* for the field.
* You can set {@link PDAcroForm#setNeedAppearances(Boolean)} to
* signal a conforming reader to generate the appearance stream.
* Providing null as the value will remove the default style string.
* @param richTextValue a rich text string
so I added
pdfDoc.getDocumentCatalog().getAcroForm().setNeedAppearances(true);
...directly after the PDDocument object, and it didn't change anything. So I searched further and found the AppearanceGenerator class, which should create the styles automatically? But it doesn't seem to, and you can't call it manually.
I'm at a loss here and Google is no help either. Seems nobody ever used this before or I'm just too stupid. I want the solution to be done in PDFBox since you don't pay for licenses and it already works for everything else I am doing (getting & replacing images, removing text fields), so it must be possible, right?
Thanks in advance.
| |
doc_1414
|
[ERROR] ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
The only difference between my manual trial and CloudFormation is that I am generating the bucket name dynamically in the template. Other than that, everything is the same. I am attaching the exact same permissions to the lambda function through CFN which would otherwise work in my testing.
I tried some of the other solutions mentioned where people fixed their system clock issue to get this working.
I am doing everything through the UI and not using CLI.
My cloudformation template is uploaded here:
Can anyone help me solve this mystery? I am having a really tough time digging into this.
A: Your IAMManagedPolicy is incorrect. Statements such as:
arn:aws:s3:::{$S3Bucket}/*
will not resolve to your bucket's name, as you are missing Sub and the syntax is incorrect.
You can try the following (Sub and ${S3Bucket} added):
IAMManagedPolicy:
Type: "AWS::IAM::ManagedPolicy"
Properties:
ManagedPolicyName: "bucket-scan-policy-2"
Path: "/"
PolicyDocument: !Sub |
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"sns:Publish",
"kms:Decrypt",
"s3:PutObjectVersionTagging",
"s3:GetObjectTagging",
"s3:PutObjectTagging"
],
"Resource": [
"arn:aws:sns:::<av-scan-start>",
"arn:aws:sns:::<av-status>",
"arn:aws:s3:::${S3Bucket}/*"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:CreateLogGroup",
"logs:PutLogEvents"
],
"Resource": "*"
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::yara-rules/*"
},
{
"Sid": "VisualEditor3",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::${S3Bucket}/*"
},
{
"Sid": "VisualEditor9",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::${S3Bucket}"
},
{
"Sid": "VisualEditor4",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::yara-rules"
}
]
}
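The reason {$S3Bucket} never resolves is that Sub only substitutes ${Name} placeholders. As an illustration only (Python's string.Template happens to share the ${} syntax; CloudFormation does not use it internally):

```python
from string import Template

# ${S3Bucket} is a real placeholder; substitution produces a usable ARN.
arn = Template("arn:aws:s3:::${S3Bucket}/*").substitute(S3Bucket="my-bucket-123")
print(arn)  # arn:aws:s3:::my-bucket-123/*

# "{$S3Bucket}" is not a ${...} placeholder, so !Sub would leave the braces
# as literal text and the policy resource would never match the real bucket.
```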
| |
doc_1415
|
I'm just talking about the "common cellphone", not smart phone/android stuff.
A: I work for a wireless semiconductor chip provider, and we work on a variety of phone platforms, from ULC (ultra low cost) segments to smartphones.
In our reference phone design, the entire code (including protocol stack, kernel, middleware, application and MMI) is written purely in C. AFAIK even first-tier customers use the C language for their framework, at least for ULC and mid-category phones, as memory size tends to be a big constraint.
A: Phones running a variety of the Symbian OS will very likely have all core OS functionality written in C++, as that is the "native" language of Symbian.
A: When talking about cellphones, there are usually two processor components in it.
1. The "main" processor that covers the user interface.
2. The "baseband" processor that powers the cellular modem. It handles the low-level radio interface, switching towers, etc.
The code for #1 tends to be higher-level (C, C++, Java, etc). The language used really depends on the OS that it is running (Windows Mobile, Symbian, Linux, something home-grown, etc). Of course, there is almost always SOME low-level assembly for the boot loader.
The code for #2 is pretty low-level. Baseband Processors tend to be little more than microcontrollers. Mostly assembly language and C. Very unlikely to find anything higher level here. (Although I have seen a few cell modems with a Python interpreter built-in.)
Usually the Baseband Processor is running some kind of minimal RTOS, or in some cases OS-less. They are very often running an RTOS called Nucleus from Mentor Graphics.
On some low-cost cell phones, #1 and #2 are joined together to cut costs (only one processor & OS in the system).
A: Hardware things, like setting registers and handling interrupts to run the radio, are all done in C.
Two problems with C++ are, in my opinion, that
* It is harder to design efficient programs in it. The CPU may only be a few hundred MHz.
* The compilers for more exotic CPUs barely work in C, so running them in C++ would be a miracle.
A: Phones running Android will use mostly C under the java machine, and the Java in the top layers.
But if you look at most phones, they are just like the rest of the embedded market:
a lot of C, and in some projects some C++.
And the smaller they are, the more C you will find.
/Johan
A: Most cellphones have different layers of software. Largely we can divide this into three parts:
1. Application layer: anything like BREW, C++ or Android
2. Middle layer: consists of real-time OS code: C code (mostly, as I have seen)
3. Lower layer: device drivers, written in C.
Please note: most common cellphones are likely to use C++ at the application layer; BREW is largely used by CDMA phones for the application layer.
A: Nokia bought Trolltech, the makers of Qt - a cross-platform application and UI framework for desktop and mobile applications. Presumably this includes cell phones. Qt is written in C++. http://www.qtsoftware.com/developer/getting-started
| |
doc_1416
|
curl: (77) error setting certificate verify locations:
CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
How do I set this certificate verify locations?
A: Another alternative to fix this problem is to disable the certificate validation:
echo insecure >> ~/.curlrc
A: For PHP code running on XAMPP on Windows I found I needed to edit php.ini to include the below
[curl]
; A default value for the CURLOPT_CAINFO option. This is required to be an
; absolute path.
curl.cainfo = curl-ca-bundle.crt
and then download https://curl.haxx.se/ca/cacert.pem, rename it to curl-ca-bundle.crt, and place it under the \xampp path (I couldn't get curl.capath to work). I also found the CA bundle on the cURL site wasn't enough for the remote site I was connecting to, so I used one that is listed with a pre-compiled Windows version of curl 7.47.1 at http://winampplugins.co.uk/curl/
A: I had the exact same problem. As it turns out, my /etc/ssl/certs/ca-certificates.crt file was malformed. The last entry showed something like this:
-----BEGIN CERTIFICATE-----
MIIEDTCCAvWgAwIBAgIJAN..lots of certificate text....AwIBAgIJAN-----END CERTIFICATE-----
After adding a newline before -----END CERTIFICATE-----, curl was able handle the certificates file.
This was very annoying to find out since my update-ca-certificates command did not give me any warning.
This may or may not be a version specific problem of curl, so here is my version, just for completeness:
curl --version
# curl 7.51.0 (x86_64-alpine-linux-musl) libcurl/7.51.0 OpenSSL/1.0.2j zlib/1.2.8 libssh2/1.7.0
# Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
# Features: IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP UnixSockets
A: curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). The default
bundle is named curl-ca-bundle.crt; you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
for example
curl --insecure https://........
A: This worked for me
sudo apt-get install ca-certificates
then go into the certificates folder:
cd /etc/ssl/certs
then you copy the ca-certificates.crt file into the /etc/pki/tls/certs
sudo cp ca-certificates.crt /etc/pki/tls/certs
if there is no tls/certs folder: create one and change permissions using chmod 777 -R folderNAME
A: Create a file ~/.curlrc with the following content
cacert=/etc/ssl/certs/ca-certificates.crt
as follows
echo "cacert=/etc/ssl/certs/ca-certificates.crt" >> ~/.curlrc
A: It seems your curl points to a non-existing file with CA certs or similar.
For the primary reference on CA certs with curl, see: https://curl.haxx.se/docs/sslcerts.html
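If Python is available on the box, its ssl module reports the CA locations the local OpenSSL build was configured with, which often mirrors the CAfile/CApath that curl complains about. A diagnostic sketch, not a fix:

```python
import ssl

# Where this system's OpenSSL build expects its CA bundle / CA directory.
paths = ssl.get_default_verify_paths()
print("cafile:", paths.openssl_cafile)   # e.g. /etc/ssl/certs/ca-certificates.crt
print("capath:", paths.openssl_capath)   # e.g. /etc/ssl/certs
# Environment variables that override these defaults (for OpenSSL itself):
print("env vars:", paths.openssl_cafile_env, paths.openssl_capath_env)
```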
A: Just create the folders, which is missing in your system..
/etc/pki/tls/certs/
and create the file using the following command,
sudo apt-get install ca-certificates
and then copy the certificate to the destination folder shown in your error. Mine was "error setting certificate verify locations: CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none". Make sure you paste the file to the exact location mentioned in the error. Use the following command to copy it:
sudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt
Fixed.
A: I had the same problem: I was building an Alpine-based docker image, and when I wanted to curl to a website of my organisation, this error appeared. To solve it, I had to get my company's CA cert and add it to the CA certs of my image.
Get the CA certificate
Use OpenSSL to get the certificates related to the website :
openssl s_client -showcerts -servername my.company.website.org -connect my.company.website.org:443
This will output something like :
CONNECTED(00000005)
depth=2 CN = UbisoftRootCA
verify error:num=19:self signed certificate in certificate chain
...
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
...
Get the last certificate (the content between the -----BEGIN CERTIFICATE----- and the
-----END CERTIFICATE----- markups included) and save it into a file (mycompanyRootCA.crt for example)
Build your image
Then, when you'll build your docker image from alpine, do the following :
FROM alpine
RUN apk add ca-certificates curl
COPY mycompanyRootCA.crt /usr/local/share/ca-certificates/mycompanyRootCA.crt
RUN update-ca-certificates
Your image will now work properly ! \o/
A: I came across this curl 77 problem while was trying to access elasticsearch running in docker container on Ubuntu 20.04 localhost. Afrer container was started:
* Check curl without strict SSL verification first: curl --cacert http_ca.crt -u elastic https://localhost:9200 -k (lowercase -k allows an insecure connection).
* Check curl's build configuration: curl-config --configure; notice the CA bundle setting: --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt.
* Copy the http_ca.crt file from the container to /usr/local/share/ca-certificates/ (the original command is here).
* Run the update on ca-certificates: sudo update-ca-certificates.
* Run curl: curl -u elastic:<password> https://localhost:9201.
Finally got response with "tagline" : "You Know, for Search".
Change <password> to the one that was generated when Docker Image was run.
Also notice that on my machine elastic was started on port 9201 (don't know why: sudo ss -tlpn | grep 9200 gives me nothing), I have found the port with: sudo netstat -ntlp and Programm name was docker-proxy.
A: The quickest way to get around the error is add on the -k option somewhere in your curl request. That option "allows connections to SSL cites without certs." (from curl --help)
Be aware that this may mean that you're not talking to the endpoint you think you are, as they are presenting a certificate not signed by a CA you trust.
For example:
$ curl -o /usr/bin/apt-cyg https://raw.github.com/cfg/apt-cyg/master/apt-cyg
gave me the following error response:
curl: (77) error setting certificate verify locations:
CAfile: /usr/ssl/certs/ca-bundle.crt
CApath: none
I added on -k:
curl -o /usr/bin/apt-cyg https://raw.github.com/cfg/apt-cyg/master/apt-cyg -k
and no error message. As a bonus, now I have apt-cyg installed. And ca-certificates.
A: I also had the newest version of ca-certificates installed but was still getting the error:
curl: (77) error setting certificate verify locations:
CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
The issue was that curl expected the certificate to be at the path /etc/pki/tls/certs/ca-bundle.crt but could not find it because it was at the path /etc/ssl/certs/ca-certificates.crt.
Copying my certificate to the expected destination by running
sudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt
worked for me. You will need to create folders for the target destination if they do not exist by running
sudo mkdir -p /etc/pki/tls/certs
If needed, modify the above command to make the destination file name match the path expected by curl, i.e. replace /etc/pki/tls/certs/ca-bundle.crt with the path following "CAfile:" in your error message.
A: From $ man curl:
--cert-type <type>
(SSL) Tells curl what certificate type the provided certificate
is in. PEM, DER and ENG are recognized types. If not specified,
PEM is assumed.
If this option is used several times, the last one will be used.
--cacert <CA certificate>
(SSL) Tells curl to use the specified certificate file to verify
the peer. The file may contain multiple CA certificates. The
certificate(s) must be in PEM format. Normally curl is built to
use a default file for this, so this option is typically used to
alter that default file.
A: @roens is correct. This affects all Anaconda users, with below error
curl: (77) error setting certificate verify locations:
CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
The workaround is to use the default system curl and avoid messing with the prepended Anaconda PATH variable. You can either
* Rename the Anaconda curl binary :)
mv /path/to/anaconda/bin/curl /path/to/anaconda/bin/curl_anaconda
* OR remove Anaconda's curl:
conda remove curl
$ which curl
/usr/bin/curl
[0] Anaconda Ubuntu curl Github issue https://github.com/conda/conda-recipes/issues/352
A: This error is related to a missing package: ca-certificates. Install it.
In Ubuntu Linux (and similar distro):
# apt-get install ca-certificates
In CygWin via Apt-Cyg
# apt-cyg install ca-certificates
In Arch Linux (Raspberry Pi)
# pacman -S ca-certificates
The documentation tells:
This package includes PEM files of CA certificates to allow SSL-based applications to check for the authenticity of SSL connections.
As seen at: Debian -- Details of package ca-certificates in squeeze
A: Put this into your .bashrc
# fix CURL certificates path
export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
(see comment from Robert)
A: If anyone is still having trouble, try this, it worked for me.
Delete the files in your /etc/ssl/certs/ directory
then reinstall ca-certificates:
sudo apt install ca-certificates --reinstall
Did this when I tried installing Linuxbrew.
A: For what it's worth, checking which curl is being run is significant too.
A user on a shared machine I maintain had been getting this error. But the cause turned out to be because they'd installed Anaconda (http://continuum.io). Doing so put Anaconda's binary path before the standard $PATH, and it comes with its own curl binary, which had trouble finding the default certs that were installed on this Ubuntu machine.
A: Just find this solution works perfectly for me.
echo 'cacert=/etc/ssl/certs/ca-certificates.crt' > ~/.curlrc
I found this solution from here
A: For windows :-
* Download the certificate bundle (cacert.pem) from https://curl.se/docs/caextract.html
* Rename cacert.pem to curl-ca-bundle.crt
* Add the file to any of the locations curl searches
Check this for details https://curl.se/docs/sslcerts.html
A: Run following command in git bash that works fine for me
git config --global http.sslverify "false"
A: I had this problem as well. My issue was this file:
/usr/ssl/certs/ca-bundle.crt
is by default just an empty file. So even if it exists, you'll still get the error as it doesn't contain any certificates. You can generate them like this:
p11-kit extract --overwrite --format pem-bundle /usr/ssl/certs/ca-bundle.crt
https://github.com/msys2/MSYS2-packages/blob/master/ca-certificates/ca-certificates.install
A: I use MobaXterm, which internally uses Cygwin, so even after installing ca-certificates using apt-cyg install ca-certificates the problem didn't resolve.
I was still getting the following error:
curl: (77) error setting certificate verify locations: CAfile: /etc/ssl/certs/ca-certificates.crt CApath: none
Then I tried listing the file /etc/ssl/certs/ca-certificates.crt and I couldn't find it. However, I could find /usr/ssl/certs/ca-bundle.crt with all the standard CA certificates, so I copied /usr/ssl/certs/ca-bundle.crt to /etc/ssl/certs/ca-certificates.crt and the problem was resolved.
A: In my case, it was a permission issue
try
sudo curl .....
| |
doc_1417
|
int main()
{
Stack<char> a;
char * expression;
expression=new char[30];
char * temp;
temp=new char[30];
int result=0;
bool flag=true;
cout<<"Enter an Infix Expression:";
cin>>expression;
//some more code//
.....
}
Is there a way, whenever the special character ! is entered, to store it in the char variable expression and exit the program? Or what can I do to exit the program whenever that key is pressed?
A: You would have to implement your own "read stuff from stdin" function that would watch for ! and act accordingly (perhaps by calling exit() directly, or throwing an exception). I'd provide more info, but your question doesn't ask for specifics. Also, how are you parsing expressions? I ask because ! happens to be the factorial operator, which your users might enter.
A: Sorry, I'm new here; I don't know if I can write code in a comment, so I just wrote a new answer.
while (true)
{
    cout << "Enter an Infix Expression:";
    cin >> expression;
    if (string(expression).at(0) == '!')
        break;
    ... // do your things here
}
exit(0);
//try this, and tell me if it works. I think this is better than the last one
A: Have you tried this? (strcmp requires the <cstring> header)
if (strcmp(expression, "!") == 0)
    exit(0);
| |
doc_1418
|
def sitemap
last_model = MyModel.active.last
if stale?(etag: last_model, last_modified: last_model.created_at.utc)
@my_models = MyModel.active
respond_to do |format|
format.xml {render layout: false}
end
end
end
Its routed in routes.rb: match '/sitemap.xml' => 'dashboard#sitemap', defaults: {format: :xml}. I use xml builder as a view template.
I have a strange issue - when I start passenger standalone 3 (compiled with nginx) in production env, I get normal responses with full xml. But after some time I start to get only a part of xml (first 65Kb or less often 16Kb).
I tried to comment stale? condition and even then I have this issue.
What could be the possible fixes? Thanks
A: Solved with starting passenger as a daemon process with:
$ passenger start -p 3000 -e production -d
| |
doc_1419
|
library(tidyverse)
library(fs)
file_path <- fs::dir_ls("User/Low Carbon London/daily_dataset")
df <- file_path%>%
map(function(path){
read_csv(path)
})
Each file is named as block_^, where ^ is an integer number. Each file within the df has columns look like this:
id mean max
a 1 2
d 2 4
f 3 6
I then read another file which contains information about the files, which is info. The data frame of info is shown as below:
id stdorToU Acorn_grouped file
a std Affluent block_1
b std Comfortable block_2
c ToU Adversity block_3
d ToU Adversity block_4
. . . .
. . . .
. . . .
n1 n2 n3 block_^
I filtered the id. Then I created a data frame to use as an index for matching the corresponding id in the files within the large list, where column 1 represents id from the info data frame.
info <- filter(info, stdorToU == "Std", Acorn_grouped == "Affluent")
i <- as.data.frame(info[,c(1)])
I am now stuck because I don't know how to convert such a large list into a data frame to match the id in i (or vice versa). Or is there a more efficient way of extracting the rows of id from the large list which contains a lot of files?
A: This creates a list, where each element is a dataframe of matched rows from the corresponding input dataframe.
matched_rows_per_df <- lapply(df, function(x) x[x$id %in% i$id,])
If you want to then combine all these dataframes into a single dataframe you can do.
combined_matches <- as.data.frame(do.call(rbind, matched_rows_per_df))
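For comparison, the same "filter each table by id membership, then stack the matches" step is small enough to sketch in plain Python (the rows and the wanted set below are hypothetical stand-ins for the frames above):

```python
# Each "file" is a list of row dicts, shaped like the CSV columns above.
block_1 = [{"id": "a", "mean": 1, "max": 2},
           {"id": "d", "mean": 2, "max": 4},
           {"id": "f", "mean": 3, "max": 6}]
block_2 = [{"id": "b", "mean": 5, "max": 9}]
dfs = [block_1, block_2]

# ids surviving the info filter (hypothetical values for the example)
wanted = {"a", "b"}

# Filter every table by id membership, then stack the matches into one list.
combined = [row for table in dfs for row in table if row["id"] in wanted]
print(combined)  # [{'id': 'a', 'mean': 1, 'max': 2}, {'id': 'b', 'mean': 5, 'max': 9}]
```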
| |
doc_1420
|
I'm currently doing this:
$data = "raw image data";
$type = "image/jpeg";
$response = $this->getResponse();
$response->setHeader('Content-Type', $type, true);
$response->setHeader('Content-Length', strlen($data), true);
$response->setHeader('Content-Transfer-Encoding', 'binary', true);
$response->setHeader('Cache-Control', 'max-age=3600, must-revalidate', true);
$response->setBody($data);
$response->sendResponse();
exit;
A: All you are missing is $response->sendResponse(); at the end before exit; and you're good.
| |
doc_1421
|
common_project
|---folder
| |--- file1.py
| |--- file2.py
| |--- file3.py
project
|---src
|----|---main_file.py
I need to access the functions of file1.py and file2.py in main_file.py. How can I zip the files of common folder so that I can import as
from common.file1 import func1
I tried to zip the folder with zip -r common_files.zip common_project/folder/. But the file is zipped as:
adding: common_project/folder/file1.py (stored 0%)
adding: common_project/folder/file2.py
But I need to import it as from common import file1
Any help would be appreciated.
A: If you want to be able to do imports as indicated, the zip file must be structured accordingly:
common
|--- file1.py
|--- file2.py
|--- file3.py
I don't know if you can achieve this with zip directly. I used a symbolic link as a workaround:
$ ln -s common_project/folder common
$ zip -r common_files.zip common
adding: common/ (stored 0%)
adding: common/file1.py (deflated 10%)
adding: common/file3.py (deflated 10%)
adding: common/file2.py (deflated 10%)
$ rm common
$ unzip -l common_files.zip
Archive: common_files.zip
Length Date Time Name
--------- ---------- ----- ----
0 2021-02-22 13:46 common/
41 2021-02-22 13:50 common/file1.py
41 2021-02-22 13:50 common/file3.py
41 2021-02-22 13:50 common/file2.py
--------- -------
123 4 files
One more thing you need to do is add the zip file to Python's import search path:
$ PYTHONPATH=/absolute/path/to/common_files.zip python
and then the imports should work:
>>> from common.file1 import func1
>>> func1()
This is func1
>>> from common import file2
>>> file2.func2()
This is func2
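If you would rather avoid the symlink workaround, Python's own zipfile module lets you set the archive-internal path (the arcname) for each file directly. A sketch assuming the same layout as above (it creates dummy files so it is self-contained):

```python
import zipfile
from pathlib import Path

src = Path("common_project/folder")   # where file1.py etc. live
# (this sketch creates dummy files so the snippet runs on its own)
src.mkdir(parents=True, exist_ok=True)
for name in ("file1.py", "file2.py", "file3.py"):
    (src / name).write_text("def func():\n    pass\n")

with zipfile.ZipFile("common_files.zip", "w") as zf:
    for py in sorted(src.glob("*.py")):
        # Store each file under "common/" inside the archive,
        # regardless of where it sits on disk.
        zf.write(py, arcname=f"common/{py.name}")

print(zipfile.ZipFile("common_files.zip").namelist())
# ['common/file1.py', 'common/file2.py', 'common/file3.py']
```

On the consuming side you still add the zip to PYTHONPATH (or sys.path) exactly as shown in the answer above.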
| |
doc_1422
|
Remote working directory is /
New local directory is D:\PP
local:PP65205861.PDF => remote:/PP65205861.PDF
Listing directory /
-rw------- 1 200 100 12414 June 03 20:05 PP65205861.PDF
New local directory is D:\PE
local:PE65205861.PDF => remote:/PE65205861.PDF
Listing directory /
-rw------- 1 200 100 6763 June 03 20:05 PE65205861.PDF
New local directory is D:\TEMP
Listing directory /
*.PDF: nothing matched
Source1 file value/s to look in lookup file
PE65205861.PDF
Source2 file value/s to look in lookup file
PP65205861.PDF
Current log output
local:PP65205861.PDF => remote:/PP65205861.PDF
-rw------- 1 200 100 12414 June 03 20:05 PP65205861.PDF
local:PE65205861.PDF => remote:/PE65205861.PDF
-rw------- 1 200 100 6763 June 03 20:05 PE65205861.PDF
Desired log output
Plus setting a value to a variable to be used in other validation.
PE65205861.PDF
PP65205861.PDF
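If the goal is to pull the transferred file names out of such a session log programmatically, a small regex over the local:NAME => remote: lines does it. A sketch in Python (the log text below is pasted from above; adjust the pattern if your log format differs):

```python
import re

log = """\
local:PP65205861.PDF => remote:/PP65205861.PDF
-rw-------   1 200      100         12414 June 03 20:05 PP65205861.PDF
local:PE65205861.PDF => remote:/PE65205861.PDF
-rw-------   1 200      100          6763 June 03 20:05 PE65205861.PDF
"""

# Capture the local file name on each transfer line.
names = re.findall(r"^local:(\S+) => remote:", log, flags=re.MULTILINE)
print(sorted(names))  # ['PE65205861.PDF', 'PP65205861.PDF']
```

The resulting list can then be compared against the lookup-file values.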
| |
doc_1423
|
var markers = [
{
"title": '1. Welgemeend',
"lat": '-33.805556',
"stopover": true,
"lng": '18.869722',
"description": '1. Welgemeend'
},
{
"title": '2. Ruitersvlei',
"lat": '-33.783294',
"lng": '18.935900',
"stopover": true,
"description": '2. Ruitersvlei'
}
];
However, when I add a third marker:
var markers = [
{
"title": '1. Welgemeend',
"lat": '-33.805556',
"stopover": true,
"lng": '18.869722',
"description": '1. Welgemeend'
},
{
"title": '2. Ruitersvlei',
"lat": '-33.783294',
"lng": '18.935900',
"stopover": true,
"description": '2. Ruitersvlei'
}
,
{
"title": '3. Spice Route',
"lat": '-33.760815',
"lng": '18.916757',
"stopover": true,
"description": '3. Spice Route'
},
];
The drawing of the lines goes crazy. Here is my gmaps code:
<div id="property-map"></div>
<script type="text/javascript">
jQuery(function($) {
var mapOptions = {
center: new google.maps.LatLng(markers[0].lat, markers[0].lng),
zoom: 10,
mapTypeId: google.maps.MapTypeId.ROADMAP
};
var map = new google.maps.Map(document.getElementById("property-map"), mapOptions);
var infoWindow = new google.maps.InfoWindow();
var lat_lng = new Array();
var latlngbounds = new google.maps.LatLngBounds();
for (i = 0; i < markers.length; i++) {
var data = markers[i]
var myLatlng = new google.maps.LatLng(data.lat, data.lng);
lat_lng.push(myLatlng);
var marker = new google.maps.Marker({
position: myLatlng,
map: map,
title: data.title
});
latlngbounds.extend(marker.position);
(function (marker, data) {
google.maps.event.addListener(marker, "click", function (e) {
infoWindow.setContent(data.description);
infoWindow.open(map, marker);
});
})(marker, data);
}
map.setCenter(latlngbounds.getCenter());
map.fitBounds(latlngbounds);
//***********ROUTING****************//
//Intialize the Path Array
var path = new google.maps.MVCArray();
//Intialize the Direction Service
var service = new google.maps.DirectionsService();
//Set the Path Stroke Color
var poly = new google.maps.Polyline({ map: map, strokeColor: '#4986E7' });
//Loop and Draw Path Route between the Points on MAP
for (var i = 0; i < lat_lng.length - 1; i++) {
var src = lat_lng[i];
var des = lat_lng[i + 1];
path.push(src);
poly.setPath(path);
service.route({
origin: src,
destination: des,
travelMode: google.maps.DirectionsTravelMode.DRIVING
}, function (result, status) {
if (status == google.maps.DirectionsStatus.OK) {
for (var i = 0, len = result.routes[0].overview_path.length; i < len; i++) {
path.push(result.routes[0].overview_path[i]);
}
}
});
}
});
</script>
What could be causing this?
A: Line 51: remove the line
path.push(src);
Here you push the raw waypoints into this array; a few lines further down you push the route segments.
That gives you two different sets of lines superimposed.
You are using confusing variable names; this causes problems. "path" is probably not a good name to store an array of data (well, not for what you are doing).
A few other details: for (i = 0; ...) -> you should declare var i on the first for() as well, not only on the second.
poly.setPath(path); should not be inside a loop.
Here is how I would have done this.
(I added a 4th random point. Remove it if you want)
<head>
<style>
html, body, #property-map {
height: 400px;
margin: 0px;
padding: 0px
}
#content {
width: 200px;
overflow: hidden;
}
</style>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?"></script>
<script>
var markers = [{
"title": '1. Welgemeend',
"lat": '-33.805556',
"lng": '18.869722',
"stopover": true,
"description": '1. Welgemeend'
},
{
"title": '2. Ruitersvlei',
"lat": '-33.783294',
"lng": '18.935900',
"stopover": true,
"description": '2. Ruitersvlei'
}
,
{
"title": '3. Spice Route',
"lat": '-33.760815',
"lng": '18.916757',
"stopover": true,
"description": '3. Spice Route'
},
{
"title": '4. Some random point',
"lat": '-33.75',
"lng": '18.90',
"stopover": true,
"description": '4. Some random point'
}];
jQuery(function($) {
var mapOptions = {
center: new google.maps.LatLng(markers[0].lat, markers[0].lng),
zoom: 10,
mapTypeId: google.maps.MapTypeId.ROADMAP
};
var map = new google.maps.Map(document.getElementById("property-map"), mapOptions);
var infoWindow = new google.maps.InfoWindow();
var routeObjects = [];
var markerObjects = [];
var directionService = new google.maps.DirectionsService();
var latlngbounds = new google.maps.LatLngBounds();
// loop through the markers.
// Create the marker, then send a request for a route between this marker and the next (except for the last iteration).
for (var i=0; i<markers.length; i++) {
// create the marker
var marker = new google.maps.Marker({
position: new google.maps.LatLng(markers[i].lat, markers[i].lng),
title: markers[i].title,
map: map
});
markerObjects.push(marker);
latlngbounds.extend(marker.position);
// click event: show an infowindow with the description
google.maps.event.addListener(marker, "click", function (e) {
var i = markerObjects.indexOf(this);
infoWindow.setContent('<div id="content">' + markers[i].description + '</div>');
infoWindow.setPosition(this.getPosition());
infoWindow.open(map, this);
});
// send a route request, except on the last iteration
if (i < markers.length - 1) {
directionService.route({
origin: new google.maps.LatLng(markers[i].lat, markers[i].lng),
destination: new google.maps.LatLng(markers[i + 1].lat, markers[i + 1].lng),
travelMode: google.maps.DirectionsTravelMode.DRIVING
}, function (result, status) {
if (status == google.maps.DirectionsStatus.OK) {
// We will draw this part of the route on the map; not worry about the other requests
var path = new google.maps.MVCArray();
for (var j = 0; j < result.routes[0].overview_path.length; j++) {
path.push(result.routes[0].overview_path[j]);
}
var poly = new google.maps.Polyline({
path: path,
map: map,
strokeColor: '#4986E7'
});
 routeObjects.push(path); // I don't really use this in this script, but it might be useful
}
});
}
else { // last iteration. This might be a good place to do the last-minute things
map.fitBounds(latlngbounds);
}
}
});
</script>
</head>
<body>
<div id="property-map"></div>
</body>
| |
doc_1424
|
#include <iostream>
using namespace std;
int main()
{
int n;
do
{
cout << "Enter a non-negative integer: ";
cin >> n;
if (n < 0)
{
cout << "The integer you entered is negative. " << endl;
}
}
while (n < 0);
return 0;
}
For the above code that I have written, the terminal requires the user to re-enter the number until it is non-negative. However, when I try to convert the do-while loop to a while loop as shown below, there is no output at all.
May I know which part I have written wrongly? Thank you.
#include <iostream>
using namespace std;
int main()
{
int n;
while (n < 0)
{
cout << "Enter a non-negative integer: ";
cin >> n;
if (n < 0){
cout << "The integer you entered is negative. " << endl;
}
}
return 0;
}
A: In the while version of your code, you do not know what the value of n is, as it is not initialized before you use it in the while (n<0) loop.
int main()
{
int n;
// HERE YOU DO NOT KNOW IF n is NEGATIVE OR POSITIVE, YOU EITHER NEED TO INITIALIZE IT OR
// ENTER IT by cin >> n
while (n < 0)
{
cout << "Enter a non-negative integer: ";
cin >> n;
if (n < 0){
cout << "The integer you entered is negative. " << endl;
}
}
return 0;
}
You then need to rearrange a little bit to get the same output, and to use the first input of n, which is read outside the loop.
For instance, this will provide the same output:
#include <iostream>
using namespace std;
int main()
{
int n=-1;
cout << "Enter a non-negative integer: ";
cin >> n;
while (n < 0)
{
if (n < 0)
{
cout << "The integer you entered is negative. " << endl;
cout << "Enter a non-negative integer: ";
cin >> n;
}
else
{
// if positive you get out of the loop
break;
}
}
return 0;
}
A: You have to read n before the while (n < 0) statement:
#include <iostream>
using namespace std;
int main()
{
int n;
cout << "Enter a non-negative integer: ";
cin >> n;
while (n < 0)
{
cout << "Enter a non-negative integer: ";
cin >> n;
if (n < 0)
cout << "The integer you entered is "
<< "negative. " << endl;
}
return 0;
}
Or you may initialize it with a negative number:
#include <iostream>
using namespace std;
int main()
{
int n=-1;
while (n < 0)
{
cout << "Enter a non-negative integer: ";
cin >> n;
if (n < 0)
cout << "The integer you entered is "
<< "negative. " << endl;
}
return 0;
}
| |
doc_1425
|
Even if I select "device" as the launch target, when I click the run button VS launches the app on the emulator, not the phone. Surprisingly, if I right-click the project in Solution Explorer and select debug, it launches the app on the device.
Has anybody else faced this kind of problem? I don't want to install everything from scratch :)
A: Ok, it was my fault: since the project build platform was x86, Visual Studio could not deploy the app to the device and hence launched the emulator. When I changed the build platform to Any CPU, it deployed my app to the device without any problem.
| |
doc_1426
|
I've looked for examples of someone using
- (void)selectItemAtIndexPath:(NSIndexPath *)indexPath animated:(BOOL)animated scrollPosition:(UICollectionViewScrollPosition)scrollPosition
But I'm not sure how to use it with my UICollectionView so that the right cells are selected (and thus highlighted) when the view loads. I have multiple selection enabled.
A: You should call selectItemAtIndexPath: for each cell you want to highlight, like so:
[self.collectionView selectItemAtIndexPath:path animated:NO scrollPosition:UICollectionViewScrollPositionNone];
Note that for one item (and one only!) you probably want to pass YES for the animated argument and provide a scroll position (one item only because otherwise you're going to be making a lot of needless animation calls).
You'll first need to get the index paths of the cells you want to select. For a collection view, an index path consists of two numbers: the section the cell is in, and the item number (the cell's position within that section).
If you store the index paths of the cells the user has selected in an array, then you can just iterate through it. Otherwise you'll need to look up the index path using a UICollectionView method such as indexPathForCell:.
| |
doc_1427
|
struct alignas(16) MyStruct {
...
};
It is meant to be used for a template parameter where the template class needs to make sure that the type it is templated on is 16 byte aligned.
A: There is alignof that you can use to make sure the type you get is aligned to the correct size. You would use it in your function like
template <typename T>
void foo(const T& bar)
{
static_assert(alignof(T) == 16, "T must have an alignment of 16");
// rest of function
}
If using this in a class you'd have
template <typename T>
class foo
{
static_assert(alignof(T) == 16, "T must have an alignment of 16");
// rest of class
};
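Putting the two pieces together, here is a minimal sketch (the struct's member is an illustrative assumption, not from the original) showing an alignas(16) type passing the template's compile-time check:

```cpp
// Illustrative 16-byte-aligned type; the member is only there for the demo.
struct alignas(16) MyStruct {
    float data[4];
};

// Instantiating this template for a type that is not 16-byte aligned
// fails at compile time with a readable message.
template <typename T>
struct Wrapper {
    static_assert(alignof(T) == 16, "T must have an alignment of 16");
    T value;
};

Wrapper<MyStruct> checked;  // compiles, because alignof(MyStruct) == 16
```

Since alignof is evaluated at compile time, the check adds no runtime cost.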
| |
doc_1428
|
private void btnActionPerformed(java.awt.event.ActionEvent evt) {
String username = txtUserName.getText();
String password = txtPassword.getText();
String email = txtEmail.getText();
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
String birthdate = sdf.format(JDateChooser.getDate());
Users user = new Users();
user.setUserName(username);
user.setPassWord(password);
user.setEmail(email);
user.setBirthDate(birthdate);
try {
int count = Users.getInstance().insert(user);
if(count == 1){
JOptionPane.showMessageDialog(null,"success");
reset();
}else{
JOptionPane.showMessageDialog(null,"Failed");
}
} catch (Exception ex) {
Logger.getLogger(AddNewPatient.class.getName()).log(Level.SEVERE, null, ex);
}
}
I got an error that says "String cannot be converted to Date" on the line "user.setBirthDate(birthdate);",
because the parameter of setBirthDate is declared as type Date.
Is there any way to solve this issue? I am new to Java programming and I am trying to improve my skills.
A: If this returns a Date:
JDateChooser.getDate()
And what you need is a Date, then don't convert it to a String. Just keep it as a Date:
Date birthdate = JDateChooser.getDate();
// later...
user.setBirthDate(birthdate);
Note that you can then also remove this line, since you're not using the variable it declares:
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
In general you want to keep data types in their raw form pretty much as often as possible. Unless there's a specific need for something to be represented as a string (displaying it to the user, sending it over a serialized API of some kind, etc.) then just use the data as-is instead of converting it to something else.
A: After you get the date with JDateChooser.getDate(), you are immediately converting it to a string: sdf.format(JDateChooser.getDate());
You should store the returned Date from JDateChooser.getDate() as an actual Date object.
Date birthdate = JDateChooser.getDate();
Then you can use it in your other function directly:
user.setBirthDate(birthdate);
If you do need the date as a string for some other purpose (perhaps display to the user), you can store a formatted string version in a different variable:
String birthdateString = sdf.format(birthdate);
Otherwise, if you don't need a string version, you can delete the line where you create sdf.
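A minimal sketch of that separation (the class and method names are illustrative): the model keeps the Date, and a String is produced only at display time, here pinned to UTC so the output is deterministic:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class BirthdateDemo {
    // The model keeps the raw Date; formatting happens only for display.
    public static String forDisplay(Date birthdate) {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
        sdf.setTimeZone(TimeZone.getTimeZone("UTC")); // make the output deterministic
        return sdf.format(birthdate);
    }
}
```

Keeping the Date raw means setBirthDate receives exactly the type it expects, and formatting stays a display-only concern.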
| |
doc_1429
|
{
"coreId" : "1",
"name" : "name",
"additionalValueList" : [
{
"columnName" : "allow_duplicate",
"rowId" : "10",
"value" : "1"
},
{
"columnName" : "include_in_display",
"rowId" : "11",
"value" : "0"
},
...e.t.c
]
},
...e.t.c
and Java class
class DTO {
@JsonProperty("coreId")
private Integer id;
private String name;
private Boolean allowDuplicate;
private Boolean includeInDisplay;
}
How can I easily map values from 'additionalValueList' to the corresponding Java fields? For example, the JSON entry whose 'columnName' is 'allow_duplicate' should map to DTO.allowDuplicate.
Actually I know how to do it with custom deserializers via the @JsonDeserialize annotation and the like. But I have 40+ DTOs, and it is not a good idea to create a separate deserializer for each field. I am looking for a solution with, for example, a single deserializer (since the value structure in 'additionalValueList' is the same for all entities) that takes a parameter from the annotation (the field name I want to map), finds the entry in 'additionalValueList' whose 'columnName' equals that parameter, and returns its 'value'.
Example
class DTO {
@JsonProperty("coreId")
private Integer id;
private String name;
@JsonDeserialize(using = MyCustDeser.class,param = allow_duplicate)
private Boolean allowDuplicate;
@JsonDeserialize(using = MyCustDeser.class,param = include_in_display)
private Boolean includeInDisplay;
}
It would be a good solution, but maybe not easy to achieve. However, I will be very grateful for all your advice. Thank you.
A: Create a Converter class, then specify it on the DTO class.
The following code uses public fields for the simplicity of the example.
/**
* Intermediate object used for deserializing FooDto from JSON.
*/
public final class FooJson {
/**
* Converter used when deserializing FooDto from JSON.
*/
public static final class ToDtoConverter extends StdConverter<FooJson, FooDto> {
@Override
public FooDto convert(FooJson json) {
FooDto dto = new FooDto();
dto.name = json.name;
dto.id = json.coreId;
dto.allowDuplicate = lookupBoolean(json, "allow_duplicate");
dto.includeInDisplay = lookupBoolean(json, "include_in_display");
return dto;
}
private static Boolean lookupBoolean(FooJson json, String columnName) {
String value = lookup(json, columnName);
return (value == null ? null : (Boolean) ! value.equals("0"));
}
private static String lookup(FooJson json, String columnName) {
if (json.additionalValueList != null)
for (FooJson.Additional additional : json.additionalValueList)
if (columnName.equals(additional.columnName))
return additional.value;
return null;
}
}
public static final class Additional {
public String columnName;
public String rowId;
public String value;
}
public Integer coreId;
public String name;
public List<Additional> additionalValueList;
}
You now simply annotate the DTO to use it:
@JsonDeserialize(converter = FooJson.ToDtoConverter.class)
public final class FooDto {
public Integer id;
public String name;
public Boolean allowDuplicate;
public Boolean includeInDisplay;
@Override
public String toString() {
return "FooDto[id=" + this.id +
", name=" + this.name +
", allowDuplicate=" + this.allowDuplicate +
", includeInDisplay=" + this.includeInDisplay + "]";
}
}
Test
ObjectMapper mapper = new ObjectMapper();
FooDto foo = mapper.readValue(new File("test.json"), FooDto.class);
System.out.println(foo);
Output
FooDto[id=1, name=name, allowDuplicate=true, includeInDisplay=false]
| |
doc_1430
|
<textarea maxlength="700"></textarea>
https://jsfiddle.net/bpyds1gd/
update
replace each blank line with a single new line: preg_replace("/(^[\r\n]*|[\r\n]+)[\s\t]*[\r\n]+/", "\r", $string)
then replace each new line with a break: echo nl2br($string) for HTML output
A: It is not 704 chars; it is 700 chars if you don't count the Enter (newline) characters.
| |
doc_1431
|
struct A {
A(int typ): type{typ} {}
const int type;
};
struct B : public A {
B(int typ): A(type) {}
};
int main() {
B b{3};
return 0;
}
Can you see the bug here, and how tricky it is?
Here we build an instance of B with 3 as the parameter, so we expect the value of type in A to be 3, right? But we have made a typing mistake in B's constructor: we do not pass the content of the received parameter to A, but the content of the value already in A::type. See the difference, typ vs type, in B's constructor.
So how can I make g++ warn me about this? It shouldn't be allowed: A is not yet initialized, so we shouldn't be able to access A's properties.
A: The flag to use is -Wuninitialized; it is already enabled by -Wall and -Wextra.
But in my case, I use gcc-6.4 in C++14 mode.
With this gcc version you have to use the flag, enable optimization, and actually use the variable that was initialized from the uninitialized one.
Only if all of these conditions are met will gcc warn you about the use of an uninitialized variable.
You can see this here: https://compiler-explorer.com/z/q53sYr. If I remove the -O2 flag or the last use of b.type, gcc will not warn us.
As the man page says (https://man7.org/linux/man-pages/man1/g++.1.html):
Note that there may be no warning about a variable that is
used only to compute a value that itself is never used,
because such computations may be deleted by data flow
analysis before the warnings are printed.
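For completeness, a sketch of the corrected code from the question: the fix is simply to forward the received parameter instead of reading the not-yet-initialized A::type.

```cpp
struct A {
    A(int typ) : type{typ} {}
    const int type;
};

struct B : public A {
    B(int typ) : A(typ) {}  // pass the parameter `typ`, not the member `type`
};
```

With this version, B b{3} initializes b.type to 3 as intended, and there is no read of an uninitialized member for the compiler to (maybe) warn about.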
| |
doc_1432
|
SELECT col.COLUMN_NAME AS ColumnName
, col.DATA_TYPE AS DataType
, col.CHARACTER_MAXIMUM_LENGTH AS MaxLength
, COLUMNPROPERTY(OBJECT_ID('[' + col.TABLE_SCHEMA + '].[' + col.TABLE_NAME + ']'), col.COLUMN_NAME, 'IsIdentity')AS IS_IDENTITY
, CAST(ISNULL(pk.is_primary_key, 0)AS bit)AS IsPrimaryKey
FROM INFORMATION_SCHEMA.COLUMNS AS col
LEFT JOIN(SELECT SCHEMA_NAME(o.schema_id)AS TABLE_SCHEMA
, o.name AS TABLE_NAME
, c.name AS COLUMN_NAME
, i.is_primary_key
FROM sys.indexes AS i JOIN sys.index_columns AS ic ON i.object_id = ic.object_id
AND i.index_id = ic.index_id
JOIN sys.objects AS o ON i.object_id = o.object_id
LEFT JOIN sys.columns AS c ON ic.object_id = c.object_id
AND c.column_id = ic.column_id
WHERE i.is_primary_key = 1)AS pk ON col.TABLE_NAME = pk.TABLE_NAME
AND col.TABLE_SCHEMA = pk.TABLE_SCHEMA
AND col.COLUMN_NAME = pk.COLUMN_NAME
WHERE col.TABLE_NAME = 'tbl_users'
ORDER BY col.TABLE_NAME, col.ORDINAL_POSITION;
I'm using this code in a cursor, and when I try to get the value of IS_IDENTITY, it's always empty. I feel like dynamic sql and cursor don't like the OBJECT_ID function. When I run this query on its own, without the cursor, it works perfectly fine.
FULL CODE:
ALTER Procedure [dbo].[sp_generateUpserts]
@databaseName nvarchar(MAX)
AS
BEGIN
SET NOCOUNT ON
DECLARE @tranState BIT
IF @@TRANCOUNT = 0
BEGIN
SET @tranState = 1
BEGIN TRANSACTION tranState
END
BEGIN TRY
Declare @TABLE_NAME varchar(100)
Declare @COLUMN_NAME varchar(100)
Declare @COLUMN_NAME_WITH_SPACE varchar(100)
Declare @DATA_TYPE varchar(100)
Declare @CHARACTER_MAXIMUM_LENGTH INT
Declare @IS_PK INT
Declare @IS_IDENTITY INT
DECLARE @statement nvarchar(MAX)
SET @statement = N'USE [' + @databaseName + '];
DECLARE cursorUpsert CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
ORDER BY TABLE_NAME'
EXECUTE sp_executesql @statement
DECLARE @use_db nvarchar(max)
DECLARE @generateSpStatement varchar(MAX)
DECLARE @spParameters varchar(MAX) = ''
DECLARE @whereColumns varchar(MAX) = ''
DECLARE @updateStatement varchar(MAX) = ''
DECLARE @insertStatement varchar(MAX) = ''
DECLARE @valueStatement varchar(MAX) = ''
OPEN cursorUpsert
FETCH NEXT FROM cursorUpsert
INTO @TABLE_NAME
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @statementColumns nvarchar(MAX)
SET @statementColumns = N'USE [' + @databaseName + '];
DECLARE cursorUpsertColumns CURSOR FOR
SELECT col.COLUMN_NAME
, col.DATA_TYPE
, col.CHARACTER_MAXIMUM_LENGTH
, COLUMNPROPERTY(OBJECT_ID(QUOTENAME(col.TABLE_SCHEMA) + ''.'' + QUOTENAME(col.TABLE_NAME)), col.COLUMN_NAME, ''IsIdentity'')AS IS_IDENTITY
, CAST(ISNULL(pk.is_primary_key, 0) AS bit) AS IS_PK
FROM INFORMATION_SCHEMA.COLUMNS AS col
LEFT JOIN(SELECT SCHEMA_NAME(o.schema_id)AS TABLE_SCHEMA
, o.name AS TABLE_NAME
, c.name AS COLUMN_NAME
, i.is_primary_key
FROM sys.indexes AS i JOIN sys.index_columns AS ic ON i.object_id = ic.object_id
AND i.index_id = ic.index_id
JOIN sys.objects AS o ON i.object_id = o.object_id
LEFT JOIN sys.columns AS c ON ic.object_id = c.object_id
AND c.column_id = ic.column_id
WHERE i.is_primary_key = 1)AS pk ON col.TABLE_NAME = pk.TABLE_NAME
AND col.TABLE_SCHEMA = pk.TABLE_SCHEMA
AND col.COLUMN_NAME = pk.COLUMN_NAME
WHERE col.TABLE_NAME = ''' + @TABLE_NAME + '''
ORDER BY col.TABLE_NAME, col.ORDINAL_POSITION;'
EXECUTE sp_executesql @statementColumns
OPEN cursorUpsertColumns
FETCH NEXT FROM cursorUpsertColumns
INTO @COLUMN_NAME, @DATA_TYPE, @CHARACTER_MAXIMUM_LENGTH, @IS_IDENTITY, @IS_PK
WHILE @@FETCH_STATUS = 0
BEGIN
-- Parameters for the SP
IF @COLUMN_NAME LIKE '% %'
BEGIN
SET @COLUMN_NAME_WITH_SPACE = @COLUMN_NAME
SET @COLUMN_NAME_WITH_SPACE = REPLACE(@COLUMN_NAME_WITH_SPACE,' ','_')
SET @spParameters = @spParameters + CHAR(13) + '@' + @COLUMN_NAME_WITH_SPACE + ' ' + @DATA_TYPE
END
ELSE
BEGIN
SET @spParameters = @spParameters + CHAR(13) + '@' + @COLUMN_NAME + ' ' + @DATA_TYPE
END
IF @DATA_TYPE IN ('varchar', 'nvarchar', 'char', 'nchar')
BEGIN
IF @CHARACTER_MAXIMUM_LENGTH = '-1'
BEGIN
SET @spParameters = @spParameters + '(MAX)'
END
ELSE
BEGIN
SET @spParameters = @spParameters + '(' + CAST(@CHARACTER_MAXIMUM_LENGTH As Varchar(10)) + ')'
END
END
-- Add a comma after each parameter
SET @spParameters = @spParameters + ', '
IF @COLUMN_NAME IN ('top')
BEGIN
IF @IS_IDENTITY != 1
BEGIN
print('YES IDENTITY')
END
-- Add where parameters: ColumnName=@ColumnName AND
SET @whereColumns = @whereColumns + CHAR(32) + '[' + @COLUMN_NAME + ']=@' + @COLUMN_NAME + ' AND'
-- Add update parameters: column1 = value1, etc.
IF @IS_IDENTITY != 1 OR @IS_PK != 1
BEGIN
SET @updateStatement = @updateStatement + CHAR(32) + '[' + @COLUMN_NAME + ']=@' + @COLUMN_NAME + ','
END
-- Add insert columns
SET @insertStatement = @insertStatement + CHAR(32) + '[' + @COLUMN_NAME + '],'
-- Add values
SET @valueStatement = @valueStatement + CHAR(32) + '@' + @COLUMN_NAME + ','
END
ELSE IF @COLUMN_NAME LIKE '% %'
BEGIN
IF @IS_IDENTITY != 1
BEGIN
print('YES IDENTITY')
END
-- Add where parameters: ColumnName=@ColumnName AND
SET @whereColumns = @whereColumns + CHAR(32) + '[' + @COLUMN_NAME + ']=@' + @COLUMN_NAME_WITH_SPACE + ' AND'
-- Add update parameters: column1 = value1, etc.
IF @IS_IDENTITY != 1 OR @IS_PK != 1
BEGIN
SET @updateStatement = @updateStatement + CHAR(32) + '[' + @COLUMN_NAME + ']=@' + @COLUMN_NAME_WITH_SPACE + ','
END
-- Add insert columns
SET @insertStatement = @insertStatement + CHAR(32) + '['+ @COLUMN_NAME + '],'
-- Add values
SET @valueStatement = @valueStatement + CHAR(32) + '@' + @COLUMN_NAME_WITH_SPACE + ','
END
ELSE
BEGIN
IF @IS_IDENTITY != 1
BEGIN
print('YES IDENTITY')
END
-- Add where parameters: ColumnName=@ColumnName AND
SET @whereColumns = @whereColumns + CHAR(32) + @COLUMN_NAME + '=@' + @COLUMN_NAME + ' AND'
-- Add update parameters: column1 = value1, etc.
IF @IS_IDENTITY != 1 OR @IS_PK != 1
BEGIN
SET @updateStatement = @updateStatement + CHAR(32) + @COLUMN_NAME + '=@' + @COLUMN_NAME + ','
END
-- Add insert columns
SET @insertStatement = @insertStatement + CHAR(32) + @COLUMN_NAME + ','
-- Add values
SET @valueStatement = @valueStatement + CHAR(32) + '@' + @COLUMN_NAME + ','
END
FETCH NEXT FROM cursorUpsertColumns
INTO @COLUMN_NAME, @DATA_TYPE, @CHARACTER_MAXIMUM_LENGTH, @IS_IDENTITY, @IS_PK
if @@FETCH_STATUS!=0
begin
-- Last row, remove things
-- Remove the last AND word
SET @whereColumns = left (@whereColumns, len(@whereColumns) -3)
-- Remove the last comma from the parameter
SET @spParameters = left (@spParameters, len(@spParameters) -1)
-- Remove the last comma from the updateStatement
SET @updateStatement = left (@updateStatement, len(@updateStatement) -1)
-- Remove the last comma from the insertStatement
SET @insertStatement = left (@insertStatement, len(@insertStatement) -1)
-- Remove the last comma from the valueStatement
SET @valueStatement = left (@valueStatement, len(@valueStatement) -1)
end
END;
CLOSE cursorUpsertColumns;
DEALLOCATE cursorUpsertColumns;
--- End Cursor Columns
-- Generate the SP
SET @generateSpStatement = 'CREATE Procedure [dbo].[sp_' + @TABLE_NAME + '_upsert]' + @spParameters
SET @generateSpStatement = @generateSpStatement + CHAR(13) + 'AS BEGIN' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'SET NOCOUNT ON' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'DECLARE @tranState BIT' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'IF @@TRANCOUNT = 0' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'BEGIN' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + CHAR(9) +'SET @tranState = 1' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + CHAR(9) +'set transaction isolation level serializable' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + CHAR(9) +'BEGIN TRANSACTION tranState' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'END' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(13) + 'BEGIN TRY' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'IF EXISTS(SELECT 1 FROM ' + @TABLE_NAME + ' WITH (updlock) WHERE' + @whereColumns + ')' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + CHAR(9) + 'UPDATE ' + @TABLE_NAME + ' SET' + @updateStatement + ' WHERE ' + @whereColumns + ';' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'ELSE' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + CHAR(9) + 'INSERT INTO ' + @TABLE_NAME + ' ('+ @insertStatement + ') VALUES (' + @valueStatement + ');' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'IF @tranState = 1 AND XACT_STATE() = 1' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + CHAR(9) + 'COMMIT TRANSACTION tranState' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + 'END TRY' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(13) + 'BEGIN CATCH' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'DECLARE @Error_Message VARCHAR(5000)' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'DECLARE @Error_Severity INT' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'DECLARE @Error_State INT' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'SELECT @Error_Message = ERROR_MESSAGE()' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'SELECT @Error_Severity = ERROR_SEVERITY()' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'SELECT @Error_State = ERROR_STATE()' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'IF @tranState = 1 AND XACT_STATE() <> 0' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + CHAR(9) +'ROLLBACK TRANSACTION' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(9) + 'RAISERROR (@Error_Message, @Error_Severity, @Error_State)' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + 'END CATCH' + CHAR(13)
SET @generateSpStatement = @generateSpStatement + CHAR(13)
SET @generateSpStatement = @generateSpStatement + 'END' + CHAR(13)
--print(@generateSpStatement)
-- Reset Variables
SET @generateSpStatement = ''
SET @spParameters = ''
SET @whereColumns = ''
SET @updateStatement = ''
SET @insertStatement = ''
SET @valueStatement = ''
FETCH NEXT FROM cursorUpsert
INTO @TABLE_NAME
END;
CLOSE cursorUpsert;
DEALLOCATE cursorUpsert;
IF @tranState = 1
AND XACT_STATE() = 1
COMMIT TRANSACTION tranState
END TRY
BEGIN CATCH
DECLARE @Error_Message VARCHAR(5000)
DECLARE @Error_Severity INT
DECLARE @Error_State INT
SELECT @Error_Message = ERROR_MESSAGE()
SELECT @Error_Severity = ERROR_SEVERITY()
SELECT @Error_State = ERROR_STATE()
IF @tranState = 1 AND XACT_STATE() <> 0
ROLLBACK TRANSACTION
RAISERROR (@Error_Message, @Error_Severity, @Error_State)
END CATCH
END
A:
I feel like dynamic sql and cursor don't like the OBJECT_ID function.
It has nothing to do with OBJECT_ID, but most likely with incorrect double quoting inside the dynamic string:
COLUMNPROPERTY(OBJECT_ID(''['' + col.TABLE_SCHEMA + ''].['' + col.TABLE_NAME + '']''), col.COLUMN_NAME, ''IsIdentity'')AS IS_IDENTITY
Anyway you should avoid manually adding [ and use QUOTENAME instead:
COLUMNPROPERTY(OBJECT_ID(QUOTENAME(col.TABLE_SCHEMA) + ''.'' + QUOTENAME(col.TABLE_NAME)), col.COLUMN_NAME, ''IsIdentity'')AS IS_IDENTITY
It's a common case and it would be really nice if here-strings/text quoting were supported.
A: Another slightly simpler way of getting this information would be something like this:
Declare @Schema SYSNAME = 'dbo'
, @Table SYSNAME= 'Orders'
SELECT
name Column_Name
, system_type_name Data_Type
, max_length Max_Length
, is_identity_column Is_Identity_Column
, ISNULL(c.PK_Column,0) Is_Primary_Key_Column
FROM sys.dm_exec_describe_first_result_set (N'SELECT * FROM '+ @Schema +'.' + @Table, null, 0) r
OUTER APPLY (
SELECT 1 PK_Column
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE s
WHERE OBJECTPROPERTY(OBJECT_ID(s.CONSTRAINT_SCHEMA + '.' + QUOTENAME(s.CONSTRAINT_NAME)), 'IsPrimaryKey') = 1
AND s.TABLE_NAME = @Table
AND s.TABLE_SCHEMA = @Schema
AND r.name COLLATE DATABASE_DEFAULT = s.COLUMN_NAME
) c(PK_Column)
You can put this code inside a function and simply call this function cross applying it with sys.tables catalog view.
The dynamic management view sys.dm_exec_describe_first_result_set has lots of other useful information too.
Let's suppose you create a function like this:
CREATE FUNCTION dbo.fn_get_Column_Info (
@Schema SYSNAME
, @Table SYSNAME)
RETURNS TABLE
AS
RETURN
(
SELECT
name Column_Name
, system_type_name Data_Type
, max_length Max_Length
, is_identity_column Is_Identity_Column
, ISNULL(c.PK_Column,0) Is_Primary_Key_Column
FROM sys.dm_exec_describe_first_result_set (N'SELECT * FROM '+ @Schema +'.' + @Table, null, 0) r
OUTER APPLY (
SELECT 1 PK_Column
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE s
WHERE OBJECTPROPERTY(OBJECT_ID(s.CONSTRAINT_SCHEMA + '.' + QUOTENAME(s.CONSTRAINT_NAME)), 'IsPrimaryKey') = 1
AND s.TABLE_NAME = @Table
AND s.TABLE_SCHEMA = @Schema
AND r.name COLLATE DATABASE_DEFAULT = s.COLUMN_NAME
) c(PK_Column)
);
GO
Then your query to get all the information you need would be as simple as..
SELECT s.name , t.name, f.*
FROM sys.schemas s
INNER JOIN sys.Tables t ON s.schema_id = t.schema_id
CROSS APPLY dbo.fn_get_Column_Info(s.name , t.name) f;
No need for any cursor or dynamic SQL, unless you want to do this for all the databases on a server. But even if you had to do that for all databases it would be a much simpler cursor.
| |
doc_1433
|
If not, is there a convenient way to share properties between Maven and TestNG?
I want to write a nice test suite that can run on different continuous integration servers, pointing to different remote hosts (development, testing, staging, and production), without modification of the code.
I am defining credentials to a remote service in settings.xml:
<properties>
<my.host>http://my.company.com</my.host>
<my.username>my-un</my.username>
<my.password>my-pw</my.password>
</properties>
I'd like to be able to reference the properties in my unit/integration tests (src/test/resources) using:
<?xml version="1.0" encoding="UTF-8"?>
<beans....
<bean class="java.lang.String" id="un">
<constructor-arg value="${my.username}"/>
</bean>
<bean class="java.lang.String" id="pw">
<constructor-arg value="${my.password}"/>
</bean>
</beans>
Are there any options for doing this? Has anyone else tried this before? I am writing a lot of REST tests which require authorization.
Thanks!
A: Well, @seanizer is on the right track, but this can be simplified since you can already set your properties in maven. Set them in your pom and in your Spring config, all you need to do is get access to them, so simply changing your config like this will accomplish that.
<beans....
<context:property-placeholder />
<bean class="java.lang.String" id="un">
<constructor-arg value="${my.username}"/>
</bean>
<bean class="java.lang.String" id="pw">
<constructor-arg value="${my.password}"/>
</bean>
</beans>
The location is not required since the properties you are interested in are now set as system properties by maven. The PropertyPlaceholderConfigurer will work with those as well as any that are defined in a file, which is not required in this specific case. Please note, you will have to include the schema for context.
I would move them from your current location though as that is a global setting, your pom is project specific so I think that is where it belongs.
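To make that fallback concrete, here is a minimal sketch in plain Java (the class and method names are illustrative) of the system-property lookup that the placeholder mechanism relies on in this setup:

```java
public class PropertyLookup {
    // Spring's placeholder resolution can fall back to JVM system properties,
    // which is where the Maven-set properties end up in this configuration;
    // this mirrors that lookup with an explicit default.
    public static String resolve(String key, String fallback) {
        String value = System.getProperty(key);
        return value != null ? value : fallback;
    }
}
```

This is why no properties-file location is needed in the `<context:property-placeholder/>` element: the values are already present as system properties.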
A: Sure. Maven resource filtering is the way to go.
Here's a sample configuration (files matching *-context.xml will be filtered, others won't):
<build>
<resources>
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
<includes>
<include>**/*-context.xml</include>
</includes>
</resource>
<resource>
<directory>src/main/resources</directory>
<filtering>false</filtering>
<excludes>
<exclude>**/*-context.xml</exclude>
</excludes>
</resource>
</resources>
</build>
A different approach would be to use the Properties Maven Plugin to write all project properties to a file and reference that file from Spring using the PropertyPlaceholderConfigurer mechanism.
Maven Configuration:
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>properties-maven-plugin</artifactId>
<version>1.0-alpha-2</version>
<executions>
<execution>
<phase>generate-test-resources</phase>
<goals>
<goal>write-project-properties</goal>
</goals>
<configuration>
<outputFile>${project.build.testOutputDirectory}/mavenproject.properties</outputFile>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
Spring configuration:
<context:property-placeholder location="classpath:mavenproject.properties"/>
A: You can get Maven to substitute a particular value into your XML file when Maven builds your project with a certain profile. So, for example, you would set up a test profile in your Maven POM, and when you build with that profile, the XML file in your jar will have the desired property in it. Look at this for an example.
A: You can define your values in a property file, refer to them using ${my-pw-key} in pom.xml, and read the property file using the properties-maven-plugin:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>properties-maven-plugin</artifactId>
<version>1.0.0</version>
<executions>
<execution>
<phase>initialize</phase>
<goals>
<goal>read-project-properties</goal>
</goals>
<configuration>
<files>
<file>${basedir}/src/main/resources/env.properties</file>
</files>
</configuration>
</execution>
</executions>
</plugin>
Properties file (env.properties)
my-pw-key=abc&123 #your password here
And then run mvn initialize package (the initialize phase loads the values from the property file).
| |
doc_1434
|
TypeError: self.parent.context.gotToManageCategoryPage is not a function
The error occurs when I click the 'Manage Categories' button.
Here is my code:
export class HomePage {
//Will be true if the user is Logged in.
public isLoggedIn = false;
//Will be true if the user is Admin.
public isAdmin = false;
constructor(public nav: NavController, public menu: MenuController, public authData: AuthData) {
this.nav = nav;
this.menu = menu;
this.authData = authData;
firebase.auth().onAuthStateChanged((user) => {
if(user) {
this.isLoggedIn = true; //Set user loggedIn is true;
var self = this;
firebase.database().ref('/userProfile/' + user.uid).once('value').then(function(snapshot) {
let userInfo = snapshot.val();
if(userInfo.isAdmin == true) {
self.isAdmin = true;
console.log(userInfo);
}
});
} else {
this.isLoggedIn = false; //Set user loggedIn is false;
}
});
}
//we are sending the admin to the ManageCategoryPage
goToManageCategoryPage() {
this.nav.push(ManageCategoryPage);
}
//we are sending the admin to the ManageProductsPage
goToManageProductsPage() {
this.nav.push(ManageProductsPage);
}
}
<ion-list *ngIf="isAdmin==true">
<button ion-item (click)="gotToManageCategoryPage()">
<ion-icon name="grid" item-left></ion-icon>
Manage Categories
</button>
<button ion-item (click)="gotToManageProductsPage()">
<ion-icon name="heart" item-left></ion-icon>
Manage Products
</button>
</ion-list>
| |
doc_1435
|
A: You can use the GitHub API to do this.
For example, you can list a repository's commits by sending a GET request to /repos/:owner/:repo/commits.
An example of a response is as follows (from the API documentation):
[
{
"url": "https://api.github.com/repos/octocat/Hello-World/commits/6dcb09b5b57875f334f61aebed695e2e4193db5e",
"sha": "6dcb09b5b57875f334f61aebed695e2e4193db5e",
"html_url": "https://github.com/octocat/Hello-World/commit/6dcb09b5b57875f334f61aebed695e2e4193db5e",
"comments_url": "https://api.github.com/repos/octocat/Hello-World/commits/6dcb09b5b57875f334f61aebed695e2e4193db5e/comments",
"commit": {
"url": "https://api.github.com/repos/octocat/Hello-World/git/commits/6dcb09b5b57875f334f61aebed695e2e4193db5e",
"author": {
"name": "Monalisa Octocat",
"email": "[email protected]",
"date": "2011-04-14T16:00:49Z"
},
"committer": {
"name": "Monalisa Octocat",
"email": "[email protected]",
"date": "2011-04-14T16:00:49Z"
},
"message": "Fix all the bugs",
"tree": {
"url": "https://api.github.com/repos/octocat/Hello-World/tree/6dcb09b5b57875f334f61aebed695e2e4193db5e",
"sha": "6dcb09b5b57875f334f61aebed695e2e4193db5e"
},
"comment_count": 0,
"verification": {
"verified": true,
"reason": "valid",
"signature": "-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----",
"payload": "tree 6dcb09b5b57875f334f61aebed695e2e4193db5e\n..."
}
},
"author": {
"login": "octocat",
"id": 1,
"avatar_url": "https://github.com/images/error/octocat_happy.gif",
"gravatar_id": "",
"url": "https://api.github.com/users/octocat",
"html_url": "https://github.com/octocat",
"followers_url": "https://api.github.com/users/octocat/followers",
"following_url": "https://api.github.com/users/octocat/following{/other_user}",
"gists_url": "https://api.github.com/users/octocat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/octocat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/octocat/subscriptions",
"organizations_url": "https://api.github.com/users/octocat/orgs",
"repos_url": "https://api.github.com/users/octocat/repos",
"events_url": "https://api.github.com/users/octocat/events{/privacy}",
"received_events_url": "https://api.github.com/users/octocat/received_events",
"type": "User",
"site_admin": false
},
"committer": {
"login": "octocat",
"id": 1,
"avatar_url": "https://github.com/images/error/octocat_happy.gif",
"gravatar_id": "",
"url": "https://api.github.com/users/octocat",
"html_url": "https://github.com/octocat",
"followers_url": "https://api.github.com/users/octocat/followers",
"following_url": "https://api.github.com/users/octocat/following{/other_user}",
"gists_url": "https://api.github.com/users/octocat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/octocat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/octocat/subscriptions",
"organizations_url": "https://api.github.com/users/octocat/orgs",
"repos_url": "https://api.github.com/users/octocat/repos",
"events_url": "https://api.github.com/users/octocat/events{/privacy}",
"received_events_url": "https://api.github.com/users/octocat/received_events",
"type": "User",
"site_admin": false
},
"parents": [
{
"url": "https://api.github.com/repos/octocat/Hello-World/commits/6dcb09b5b57875f334f61aebed695e2e4193db5e",
"sha": "6dcb09b5b57875f334f61aebed695e2e4193db5e"
}
]
}
]
Since you're using an API, you can request it each time the page is loaded, in a sense, automatically updating it.
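If all you need is the total number of commits, a common trick that relies on GitHub's documented pagination is to request `per_page=1` and read the page number out of the `rel="last"` entry of the `Link` response header — that page number equals the commit count. Below is a minimal sketch of just the header parsing in Python (the HTTP request itself is omitted; use any client you like, and note `commit_count_from_link_header` is a name made up for this example):

```python
import re

def commit_count_from_link_header(link_header):
    """Given the Link header returned for
    GET /repos/:owner/:repo/commits?per_page=1,
    return the page number of the rel="last" link, which equals the
    repository's total commit count. Returns None when the header is
    missing (e.g. the repository fits on a single page)."""
    match = re.search(r'[?&]page=(\d+)>;\s*rel="last"', link_header or "")
    return int(match.group(1)) if match else None
```

For a repository with 1299 commits, GitHub sends something like `<https://api.github.com/repos/octocat/Hello-World/commits?per_page=1&page=1299>; rel="last"`, and the function returns 1299.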
| |
doc_1436
|
A: The first thing MacPorts has to do when updating is synchronise the ports tree.
These are in /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports.
The modification date of this directory can tell you when the ports were last synchronised.
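For example, a small shell helper (the path in the comment assumes the default /opt/local MacPorts prefix; adjust if yours differs):

```shell
# Print a directory's last-modification time. Both BSD date (macOS)
# and GNU date accept `-r FILE` to display a file's mtime.
last_sync() {
    date -r "$1"
}

# Assumed default MacPorts ports tree location:
# last_sync /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports
```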
| |
doc_1437
|
from django.db import models
from django.contrib.auth.models import User
# Create your models here.
class Category(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="categories")
name = models.CharField(max_length=30, unique=True, primary_key=True)
class Todo(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='todos')
# TODO: Add confirmation before deleting category
category = models.ForeignKey(Category, on_delete=models.CASCADE,
related_name="todos_in_category", null=True)
item = models.CharField(max_length=50)
added = models.DateTimeField(auto_now_add=True)
completed = models.BooleanField(default=False)
Previously, Category's PK was the default id; however, I changed it to the name field. When I ran the migrations, I received the operational error. Thinking that it was perhaps due to a conflict between the existing id fields and the new primary key, I cleared the data in the database, but with no success. Any ideas as to what could be the issue here? Thanks!
| |
doc_1438
|
I want to change the mouse cursor for one component of the JPanel.
But it seems that JList doesn't dispatch mouse movement / position to its children, so my cursor is not updated.
Here is the tree of my JList :
JList
Custom Cell Renderer
Custom Cell (JPanel)
Components
My component with mouse cursor changed
How can I make the JList dispatch the mouse position?
Thanks.
EDIT : some code :
public class JCOTSDisplay extends JList
{
public JCOTSDisplay()
{
setCellRenderer(new COTSListCellRenderer());
setModel(.....);
}
}
public class COTSListCellRenderer implements ListCellRenderer
{
@Override
public Component getListCellRendererComponent(final JList list, final Object value, final int index, final boolean isSelected, final boolean cellHasFocus)
{
return new JCOTSCell((COTS) value);
}
}
public class JCOTSCell extends JPanel
{
public JCOTSCell(final COTS cots)
{
initComponents();
}
private void initComponents()
{
JLabel lblUrl = new JLabel("<url>");
lblUrl.setCursor(Cursor.getPredefinedCursor(Cursor.HAND_CURSOR));
}
}
A: OK, so a JList is display-only; it behaves as if the components were rendered as an image, so any mouse listeners / actions will not be fired / dispatched.
I have replaced my JList with a JPanel using a GridLayout with 0 rows and 1 column.
I have instantiated my model and my cell renderer and used them like the JList does.
And now everything works like I want.
Thanks.
A: If I understand correctly, you have a JList of items, some of which may be hyperlinks, and you want a HAND cursor for just those items? As mentioned by @kleopatra, the decoration of these items would be handled by the renderer, but the custom cursor would be handled by a listener on the JList.
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
@SuppressWarnings("unchecked")
public class JListHoverDemo implements Runnable
{
private JList jlist;
private Cursor defaultCursor;
public static void main(String args[])
{
SwingUtilities.invokeLater(new JListHoverDemo());
}
public void run()
{
Object[] items = new String[] {
"One", "Two", "http://www.stackoverflow.com",
"Four", "Five", "http://www.google.com", "Seven"
};
jlist = new JList(items);
jlist.setCellRenderer(new HyperlinkRenderer());
jlist.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
jlist.setVisibleRowCount(5);
jlist.addMouseMotionListener(new MouseMotionAdapter()
{
@Override
public void mouseMoved(MouseEvent event)
{
adjustCursor(event.getPoint());
}
});
defaultCursor = jlist.getCursor();
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.getContentPane().add(new JScrollPane(jlist));
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
private void adjustCursor(Point point)
{
Cursor cursor = defaultCursor;
int index = jlist.locationToIndex(point);
if (index >= 0)
{
Object item = jlist.getModel().getElementAt(index);
if (isHyperlink(item))
{
cursor = Cursor.getPredefinedCursor(Cursor.HAND_CURSOR);
}
}
jlist.setCursor(cursor);
}
private boolean isHyperlink(Object item)
{
String text = item == null ? "" : item.toString();
return text.startsWith("http");
}
private class HyperlinkRenderer extends DefaultListCellRenderer
{
@Override
public Component getListCellRendererComponent(JList list, Object value,
int index, boolean isSelected, boolean hasFocus)
{
Component comp = super.getListCellRendererComponent(
list, value, index, isSelected, hasFocus);
if (isHyperlink(value))
{
setFont(comp.getFont().deriveFont(Font.ITALIC));
setForeground(Color.BLUE);
}
return comp;
}
}
}
A: Have you tried using the setCursor method on your Custom Cell?
| |
doc_1439
| ||
doc_1440
|
https://developers.google.com/custom-search/
That's how they generate search requests:
function l(w, x, y) {
var v = "feed/http://news.google.com/news?q=" + encodeURIComponent(w) + "&ie=UTF-8&ned=us&nolr=1&output=atom";
(from https://s3.feedly.com/web/28.0.980/js/10101_web02.js)
...but e.g. if I enter http://news.google.com/news?q=stackoverflow&ie=UTF-8&ned=us&nolr=1&output=atom I just get a normal search response from Google, but no Atom feed output.
| |
doc_1441
|
string a = "abc";
var result = a.Substring(1,0);
Console.WriteLine(result);
This code will be compiled and will write nothing to console.
What is the reason that this is allowed?
In which case can this be used?
UPDATE
I will clarify that I know what is this method and that in this case it is returning empty string. I am NOT asking why the result is empty. I am asking why it's allowed to do this.
A:
This code will be compiled and will write nothing to console.
First of all, technically speaking, this statement is wrong: it writes a new line to the console. That's where the Line in WriteLine comes in. But let's not be picky.
What is the reason that this is allowed?
There is no reason to disable it. Say for instance you want to make a string insertion method:
public static string StringInsert(String original, String toInsert, int index) {
return original.Substring(0,index)+toInsert+original.Substring(index);
}
Now our StringInsert method cannot know whether the first or second part will be empty (we could decide to insert at index 0). If we had to take into account that the first substring could have zero length, or the second, or both, then we would have to implement a lot of if-logic. Now we can use a one-liner.
Usually one considers a string s a sequence of characters s = s_0 s_1 ... s_{n-1}. A substring from i with length j is the string t = s_i s_{i+1} ... s_{i+j-1}. There is no ambiguity here: it is clear that if j is 0, then the result is the empty string. Usually you only raise an exception if something is exceptional: the input does not make any sense, or is not allowed.
A: Many things are allowed because there is no good reason to prohibit them.
Substrings of length zero are one such thing: in situations when the desired length is computed, this saves programmers who use your library from having to zero-check the length before making a call.
For example, let's say the task is to find a substring between the first and the last hash mark # in a string. Current library lets you do this:
var s = "aaa#bbb"; // <<== Only one # here
var start = s.IndexOf('#');
var end = s.LastIndexOf('#');
var len = end-start;
var substr = s.Substring(start, len); // Zero length
If zero length were prohibited, you would be forced to add a conditional:
var len = end-start;
var substr = len != 0 ? s.Substring(start, len) : "";
Checking fewer prerequisites makes your library easier to use. In a way, the Pythagorean theorem is useful in no small part because it works for the degenerate case, when the length of all three sides is zero.
A: The method you use has the following signature:
public string Substring(
int startIndex,
int length
)
where startIndex is
The zero-based starting character position of a substring in this
instance.
and length is
The number of characters in the substring.
That being said the following call is a pretty valid call
var result = a.Substring(1,0);
but is meaningless, since the number of characters in the substring you want to create is 0. This is why you don't get anything in the Console as an output.
In other words, a call to Substring with 0 passed as the second argument simply yields the empty string.
A: From the documentation:
public string Substring(int startIndex, int length)
A string that is equivalent to the substring of length length that
begins at startIndex in this instance, or Empty if startIndex is equal
to the length of this instance and length is zero.
Basically, when you do someString.Substring(n, 0);, what you're getting back is a string that starts at n and has length 0.
The length parameter represents the total number of characters to extract from the current string instance.
Thats why nothing is printed to the console. The returned string is empty (has length 0).
EDIT:
Well, there is a limitation in place: the method throws an ArgumentOutOfRangeException if:
startIndex plus length indicates a position not within this instance.
-or-
startIndex or length is less than zero.
The reason an exception is thrown if length is less than zero, but not if it is equal to zero, is most likely that, though pointless in most situations, requesting a string of length 0 is not an invalid request.
| |
doc_1442
|
A: You can use form.getAll("currentMembers[]") to get all the values
var form = new FormData(document.querySelector('#modalForm'));
console.log(form.getAll("currentMembers[]"));
<form id="modalForm">
<input type="hidden" name="currentMembers[]" value="joe">
<input type="hidden" name="currentMembers[]" value="joe2">
<input type="hidden" name="currentMembers[]" value="joe3">
</form>
| |
doc_1443
|
<head>
<script src="External Javascript" type="text/javascript"></script>
</head>
Now the problem is that I want to restrict the loading of the file for a few users.
How can I achieve it here?
A: You can probably do this in pure JavaScript
<script>
if (condition) { //whatever that might be
document.write(unescape("%3Cscript src='sourcepath.js' type='text/javascript'%3E%3C/script%3E"));
}
</script>
However there's nothing preventing a user from figuring out the script path (since it's just written there) and then loading it manually.
You'd want to make sure you don't use this method to hide content that only privileged users can see.
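An alternative sketch that avoids document.write by creating the script element dynamically. The doc parameter exists only to make the function easy to unit-test — in a real page you would pass the global document (loadScriptIf is a name made up for this example):

```javascript
function loadScriptIf(condition, srcPath, doc) {
  // Only add the script element when the condition holds.
  if (!condition) {
    return null;
  }
  var s = doc.createElement("script");
  s.type = "text/javascript";
  s.src = srcPath;
  doc.head.appendChild(s);
  return s;
}

// In a browser: loadScriptIf(userIsPrivileged, "sourcepath.js", document);
```

The same caveat applies: the script URL is still visible to anyone reading the page source, so this must not be the only access control.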
| |
doc_1444
|
SELECT SUBSTRING(
(SELECT ', '+ Name AS 'data()' from Names for xml path(''))
,3, 255) as "MyList"
Is it possible to make LINQ to SQL generate the concatenation in the database as a subquery, rather than in memory or with many additional queries?
| |
doc_1445
|
My goal is to calculate a certain measure for each point in list_of_peaks which is based on the points in data closer to it than to any other peak, i.e. I want to partition data halfway between each point in list_of_peaks.
My current (very slow) algorithm is this:
def measure(d,t_m,t_p):
radius = d[(d[:,0] > t_m)* (d[:,0] < t_p)]
return np.max(radius) - np.min(radius)
list_of_measures = []
for i in range(len(list_of_peaks)):
if i == 0:
list_of_measures.append(measure(data,data[0,0],(list_of_peaks[i+1] - list_of_peaks[i])/2+list_of_peaks[i]))
elif i == len(list_of_peaks) - 1:
list_of_measures.append(measure(data,list_of_peaks[i] - (list_of_peaks[i]-list_of_peaks[i-1])/2,data[-1,0]))
else:
list_of_measures.append(measure(data,list_of_peaks[i] - (list_of_peaks[i]-list_of_peaks[i-1])/2,(list_of_peaks[i+1] - list_of_peaks[i])/2+list_of_peaks[i]))
I haven't found any nice built-in numpy function that would serve my purpose, but I am pretty sure this can be done a LOT better, I just don't see how.
A: You can use numpy.where() (np.where()):
x = np.array([
[0.1, 0.4, 0.7],
[0.3, 0.5, 0.2],
[0.9, 0.1, 0.8],
])
y = x[np.where(x[:,1] == 0.5)]
y
[[0.3 0.5 0.2]]
# or with multiple condition
y = x[np.where((x[:, 1] > 0.1 ) & (x[:, 1] < 0.5))]
y
[[0.1 0.4 0.7]]
A: As Brenlla pointed out, np.split can do most of what I want, so it came down to finding the indices as fast as possible. Fortunately, there is also a built-in numpy function that is very fast for a time series, since a time series is sorted by definition. The final map may have a faster solution, but the slow part of this algorithm was the splitting anyway:
splitter = np.ediff1d(list_of_peaks)/2 + list_of_peaks[:-1]
splitter_ind = np.searchsorted(data[:,0],splitter,side='right')
split_data = np.split(data[:,1],splitter_ind)
measures = np.array(list(map(lambda x: np.max(x) - np.min(x),split_data)))
| |
doc_1446
|
db.py
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool
def _create_engine(app):
impac_engine = create_engine(
app['DB'],
poolclass=NullPool # this setting enables NOT to use Pooling, preventing from timeout issues.
)
return impac_engine
def get_all_pos(app):
engine = _create_engine(app)
qry = """SELECT DISTINCT id, name FROM p_t ORDER BY name ASC"""
try:
cursor = engine.execute(qry)
rows = cursor.fetchall()
return rows
except Exception as re:
raise re
I'm trying to write some test cases by mocking this connection -
tests.py
import unittest
from db import get_all_pos
from unittest.mock import patch
from unittest.mock import Mock
class TestPosition(unittest.TestCase):
@patch('db.sqlalchemy')
def test_get_all_pos(self, mock_sqlalchemy):
mock_sqlalchemy.create_engine = Mock()
get_all_pos({'DB': 'test'})
if __name__ == '__main__':
unittest.main()
When I run the above file python tests.py, I get the following error -
"Could not parse rfc1738 URL from string '%s'" % name
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string 'test'
Shouldn't mock_sqlalchemy.create_engine = Mock() give me a mock object and bypass the URL check?
A: Another option would be to mock your _create_engine function. Since this is a unit test and we want to test get_all_pos we shouldn't need to rely on the behavior of _create_engine, so we can just patch that like so.
import unittest
import db
from unittest.mock import patch
class TestPosition(unittest.TestCase):
@patch.object(db, '_create_engine')
def test_get_all_pos(self, mock_sqlalchemy):
args = {'DB': 'test'}
db.get_all_pos(args)
mock_sqlalchemy.assert_called_once()
mock_sqlalchemy.assert_called_with({'DB': 'test'})
if __name__ == '__main__':
unittest.main()
If you want to test certain results you will need to properly set all the corresponding attributes. I would recommend not chaining it into one call so that it is more readable as shown below.
import unittest
import db
from unittest.mock import patch
from unittest.mock import Mock
class Cursor:
def __init__(self, vals):
self.vals = vals
def fetchall(self):
return self.vals
class TestPosition(unittest.TestCase):
@patch.object(db, '_create_engine')
def test_get_all_pos(self, mock_sqlalchemy):
to_test = [1, 2, 3]
mock_cursor = Mock()
cursor_attrs = {'fetchall.return_value': to_test}
mock_cursor.configure_mock(**cursor_attrs)
mock_execute = Mock()
engine_attrs = {'execute.return_value': mock_cursor}
mock_execute.configure_mock(**engine_attrs)
mock_sqlalchemy.return_value = mock_execute
args = {'DB': 'test'}
rows = db.get_all_pos(args)
mock_sqlalchemy.assert_called_once()
mock_sqlalchemy.assert_called_with({'DB': 'test'})
self.assertEqual(to_test, rows)
| |
doc_1447
|
I am trying to run my Grails 3.3.2 app in a Jetty (v 9.1.4) container. I added the following to my build.gradle file.
ext['jetty.version'] = '9.1.4.v20140401'
And tried several combinations of the Spring Boot container starter.
compile "org.springframework.boot:spring-boot-starter-actuator"
provided "org.springframework.boot:spring-boot-starter-tomcat"
compile "org.springframework.boot:spring-boot-starter-jetty"
but I continue to get errors.
2018-08-15 17:07:49.375:INFO:oejs.Server:main: jetty-9.1.4.v20140401
2018-08-15 17:07:51.722:WARN:oejj.ObjectMBean:main:
java.lang.NoClassDefFoundError: org/eclipse/jetty/jmx/ObjectMBean
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:427)
at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:389)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
I can run the app when embedding Jetty or Tomcat but not from my container.
I build the war from my project as follows.
grails war
A: Two things:
*You are missing the jetty-jmx-<version>.jar dependency for that NoClassDefFoundError.
*Jetty 9.1.x is EOL (End of Life); consider upgrading: https://www.eclipse.org/jetty/documentation/current/what-jetty-version.html
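A sketch of the missing dependency in Gradle notation — the coordinates below are assumed from the Jetty version shown in the startup log, so verify the exact version against your Jetty release:

```groovy
dependencies {
    // Provides org.eclipse.jetty.jmx.ObjectMBean, the class missing in the
    // NoClassDefFoundError above.
    compile "org.eclipse.jetty:jetty-jmx:9.1.4.v20140401"
}
```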
| |
doc_1448
|
Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
0 1 Error Rate
0 147857 234035 0.612830 =234035/381892
1 44782 271661 0.141517 =44782/316443
Totals 192639 505696 0.399260 =278817/698335
Any expert suggestions to treat the data and reduce the error are welcome.
The following approaches were tried and the error did not decrease.
Approach 1: Selecting the top 5 important variables via h2o.varimp(gbm)
Approach 2: Converting negative normalized values to 0 and positive values to 1.
#Data Definition
# Variable Definition
#Independent Variables
# ID Unique ID for each observation
# Timestamp Unique value representing one day
# Stock_ID Unique ID representing one stock
# Volume Normalized values of volume traded of given stock ID on that timestamp
# Three_Day_Moving_Average Normalized values of three days moving average of Closing price for given stock ID (Including Current day)
# Five_Day_Moving_Average Normalized values of five days moving average of Closing price for given stock ID (Including Current day)
# Ten_Day_Moving_Average Normalized values of ten days moving average of Closing price for given stock ID (Including Current day)
# Twenty_Day_Moving_Average Normalized values of twenty days moving average of Closing price for given stock ID (Including Current day)
# True_Range Normalized values of true range for given stock ID
# Average_True_Range Normalized values of average true range for given stock ID
# Positive_Directional_Movement Normalized values of positive directional movement for given stock ID
# Negative_Directional_Movement Normalized values of negative directional movement for given stock ID
#Dependent Response Variable
# Outcome Binary outcome variable representing whether price for one particular stock at the tomorrow’s market close is higher(1) or lower(0) compared to the price at today’s market close
temp <- tempfile()
download.file('https://github.com/meethariprasad/trikaal/raw/master/Competetions/AnalyticsVidhya/Stock_Closure/test_6lvBXoI.zip',temp)
test <- read.csv(unz(temp, "test.csv"))
unlink(temp)
temp <- tempfile()
download.file('https://github.com/meethariprasad/trikaal/raw/master/Competetions/AnalyticsVidhya/Stock_Closure/train_xup5Mf8.zip',temp)
#Please wait for 60 Mb file to load.
train <- read.csv(unz(temp, "train.csv"))
unlink(temp)
summary(train)
#We don't want the ID
train<-train[,2:ncol(train)]
# Preserving Test ID if needed
ID<-test$ID
#Remove ID from test
test<-test[,2:ncol(test)]
#Create Empty Response SalePrice
test$Outcome<-NA
#Original
combi.imp<-rbind(train,test)
rm(train,test)
summary(combi.imp)
#Creating Factor Variable
combi.imp$Outcome<-as.factor(combi.imp$Outcome)
combi.imp$Stock_ID<-as.factor(combi.imp$Stock_ID)
combi.imp$timestamp<-as.factor(combi.imp$timestamp)
summary(combi.imp)
#Brute Force NA treatment by taking only complete cases without NA.
train.complete<-combi.imp[1:702739,]
train.complete<-train.complete[complete.cases(train.complete),]
test.complete<-combi.imp[702740:804685,]
library(h2o)
y<-c("Outcome")
features=names(train.complete)[!names(train.complete) %in% c("Outcome")]
h2o.shutdown(prompt=F)
#Adjust memory size based on your system.
h2o.init(nthreads = -1,max_mem_size = "5g")
train.hex<-as.h2o(train.complete)
test.hex<-as.h2o(test.complete[,features])
#Models
gbmF_model_1 = h2o.gbm( x=features,
y = y,
training_frame =train.hex,
seed=1234
)
h2o.performance(gbmF_model_1)
A: You've only trained a single GBM with the default parameters, so it doesn't look like you've put enough effort into tuning your model. I'd recommend a random grid search on GBM using the h2o.grid() function. Here is an H2O R code example you can follow.
| |
doc_1449
|
function MyF () {
var xmlhttp;
var txt,x,i=0;
var xx;
if (window.XMLHttpRequest){
xmlhttp=new XMLHttpRequest();
}
else{
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
    xmlhttp.onreadystatechange=function(){
     if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
       x = xmlhttp.responseXML.documentElement.getElementsByTagName("CD");
       xx = x[i].getElementsByTagName("TITLE");
       document.getElementById("test").innerHTML=xx;
      }
    }
    // open() must be called before send()
    xmlhttp.open("GET","cd_catalog.xml",true);
    xmlhttp.send();
}
A: xmlhttp.responseXML.documentElement is the source of your troubles. Just use xmlhttp.responseXML.getElementsByTagName and you should be fine.
| |
doc_1450
|
Code
Structs
//the pixel structure
typedef struct {
GLubyte r, g, b;
} pixel;
//the global structure
typedef struct {
pixel *data;
int w, h;
} glob;
glob global, original, temp;
my copy Code
void copyPic(glob *source, glob *dest){
int x,y;
dest -> w = source -> w;
dest -> h = source -> h;
    dest -> data = (pixel *) malloc(dest->w * dest->h * sizeof(pixel));
    for (x=0; x < dest -> w; x++)
        for (y=0; y < dest -> h; y++){
            memcpy(&dest->data[x+y*dest->w], &source->data[x+y*dest->w], sizeof(pixel));
        }
}
The idea: the glob struct holds the image width, height and pixel* data which is a pointer to an array of R,G,B values.
I want to copy the global to temporary so when i change the RGB of temp->data it doesnt affect the code that is currently executing and basing changing the RGB off the RGB of global->data.
New Code
void copyPic(glob *src, glob *dest){
dest -> w = src -> w;
dest -> h = src -> h;
dest -> data = (pixel *) malloc(dest->w * dest->h * sizeof(pixel));
memcpy(dest->data, src->data, sizeof(pixel) * dest->w * dest->h);
}
Do I have to free anything?
A: You are calling memcpy many times (w * h). I would suggest that you copy only once:
memcpy(dest->data, source->data, sizeof(pixel) * w * h);
A: First: your API is not very cooperative. By assigning to dest->data, you could possibly overwrite its previous contents, and thus leak memory. If your sole purpose is to duplicate the struct object (using a deep copy), it would IMHO be more robust to implement this as a dup operation like:
glob * dup_the_Pic(glob *src) {
    glob * dst;
    dst = malloc (sizeof *dst);
    // maybe check for malloc failure here
    memcpy (dst, src, sizeof *dst);
    dst->data = malloc(dst->w * dst->h * sizeof *dst->data);
    // maybe check for malloc failure here, too
    memcpy(dst->data, src->data, dst->w * dst->h * sizeof *dst->data);
    return dst;
}
To be called like:
glob *the_original, *the_copy;
the_original = read_thing_from_file( ...);
the_copy = dup_the_Pic(the_original);
do_stuff_with_the_pic(the_copy);
| |
doc_1451
|
I'm thinking along these lines:
select imageName, image from table where blah blah blah --(returns table of over 16,000 images).
--do whatever it takes to save each image in a file with filename = imageName
Any help is much appreciated.
A: You could use the bcp Utility to do this.
A: Sounds like an ETL (extract, transform, load) process, in which case I recommend http://hibernatingrhinos.com/open-source/rhino-etl to build the process and handle the operations. Rhino.Etl is an alternative to MSSQL's DTS and SSIS.
The biggest thing I see is that you don't want to load all 16,000 records into memory at once; instead, stream the records through one at a time to keep memory consumption low.
A: I ended up writing a C# application that uses code from http://www.redmondpie.com/inserting-in-and-retrieving-image-from-sql-server-database-using-c/ in order to write out the images. I wrote the app to grab the images in user-set batch sizes, and ended up writing out about 30,000 images in just over an hour. Thanks for the suggestions.
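As a sketch of the streaming approach in Python (names here are illustrative; rows would come from a server-side cursor — e.g. pyodbc, iterating one row at a time — so the 16,000+ images are never all held in memory at once):

```python
import os

def save_images(rows, out_dir):
    """Write each (image_name, image_bytes) pair from an iterable to
    out_dir/image_name, one file at a time; return the number written."""
    os.makedirs(out_dir, exist_ok=True)
    written = 0
    for name, blob in rows:
        with open(os.path.join(out_dir, name), "wb") as f:
            f.write(blob)
        written += 1
    return written
```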
| |
doc_1452
|
var1 := "RP/0/RSP0/CPU0:,mnet-prd-hub"
Without ,mnet-prd-hub the script removes the unwanted text, but if I add anything else to var1 it stops working.
I want to also remove mnet-prd-hub:
from mnet-prd-hub:/home/data/configs/current$/home/data/configs/current$
I've tried "if var contains %clipboard%".
I've tried "clipboard contains %var1%".
I've tried IfInString with no luck.
So I am asking for someone's tutelage.
I've tried for hours with no luck; any help would be greatly appreciated.
SetTitleMatchMode, 2
#IfWinActive, ahk_class VTWin32
::ttwa::
var1 =
var1 :=
clipboard = ; empty clipboard
sleep 200
send !e {enter}
send s
send {enter 100}
sleep 100
Send {click}
sleep 100
send {click}
sleep 100
send {click}
sleep 10
MsgBox 1st %clipboard% %var1%
;var1 represents a Cisco 9k this script removes var1 puts the proper name in the Title Window
var1 := "RP/0/RSP0/CPU0:,mnet-prd-hub"
MsgBox 2nd %clipboard% %var1%
;if var1 in %clipboard%
IfInString, var1, %clipboard%
{
MsgBox 3rd %var1% %clipboard%
StringReplace, clipboard, clipboard, %var1%,, All
StringReplace, clipboard, clipboard, #,, all
MsgBox 4rd %var1% %clipboard%
var1 =
var1 :=
MsgBox 5th %var1% %clipboard%
}
else
{
MsgBox 6th %var1% %clipboard%
StringReplace, clipboard, clipboard, #, , all
MsgBox 6th %var1% %clipboard%
}
sleep 200
send !s
sleep 200
Send w
send %clipboard% {enter}
sleep 200
send !e s {enter}
#IfWinActive
return
A: I'm not exactly sure what you are looking for, but since you said you had problems adding a string to your variable, try this code, it will show you how to remove and add text to/from your variables:
var1 := "RP/0/RSP0/CPU0:,mnet-prd-hub"
MsgBox, %var1% `n`nClick OK to remove RP/0/RSP0/CPU0: from that string!
var1 := RemoveFromString(var1, "RP/0/RSP0/CPU0:")
MsgBox, %var1% `n`nClick OK to add mnet-prd-hub:/home/data/configs/current$/home/data/configs/current$ to that string!
var1 := AddToString(var1, "mnet-prd-hub:/home/data/configs/current$/home/data/configs/current$")
MsgBox, %var1% `n`nClick OK to remove mnet-prd-hub: from that string!
var1 := RemoveFromString(var1, "mnet-prd-hub:")
MsgBox, %var1%
RemoveFromString(string,stringToRemove) {
Return StrReplace(string, stringToRemove, "")
}
AddToString(string,stringToAdd) {
Return string stringToAdd
}
Edit:
So you want to see if the contents of var1 are to be found in the clipboard?
It can be done like this:
If InStr(Clipboard,var1) {
MsgBox, the contents of var1 were found in the clipboard.
}
Edit 2:
Like this?
If InStr(Clipboard,var1) {
Clipboard := RemoveFromString(Clipboard,var1)
}
RemoveFromString(string,stringToRemove) {
Return StrReplace(string, stringToRemove, "")
}
A: This works perfectly; a genius helped me out on this.
Groupadd, TerminalWindows, ahk_class VTWin32
Groupadd, TerminalWindows, ahk_class PuTTY
SetTitleMatchMode, 2
#If WinActive("ahk_group TerminalWindows")
::ttwa::
uNames := "mnet-prd-hub:,RP/0/RSP0/CPU0:,RP/0/RSP1/CPU0:"
Clipboard= ; empty clipboard
sleep 200
send !e {enter}
send s
send {enter 100}
sleep 100
Send {click}
sleep 100
send {click}
sleep 100
send {click}
sleep 10
Loop, Parse, uNames, `,
clipboard := RegexReplace(clipboard, a_loopfield)
; msgbox % clipboard
if (WinActive("ahk_class VTWin32"))
{
sleep 200
send !s
sleep 200
Send w
send %clipboard% {enter}
sleep 200
send !e s {enter}
} else if (WinActive("ahk_class PuTTY"))
{
; steps for putty
}
#IfWinActive
return
| |
doc_1453
|
Right now with each API call, I get 7 items (meaning 1 image and 4 answers for each item). But the number of questions can change anytime, so I am trying to implement a way to automate the view creation process.
Based on what I searched, this is what I was able to come up with:
QuizMainViewController.h
#import <UIKit/UIKit.h>
@interface QuizMainViewController: UIViewController <UIScrollViewDelegate>
@property (nonatomic,retain)NSArray *results;
@property (nonatomic,assign)NSUInteger currentItemIndex;
@property(nonatomic,assign)NSUInteger numberOfItemsIntheResultsArray;
QuizMainViewController.m
#import "QuizMainViewController.m"
#import "AppDelegate.h"
#import "Base64.h"
#import "Item.h"
#import "SVProgressHUD.h"
#import <UIKit/UIKit.h>
@interface QuizMainViewController ()
@property(nonatomic,strong)IBOutlet UIView *frontView;
@property (nonatomic,strong)IBOutlet UIView *BackView;
//first view outlets
@property (strong, nonatomic) IBOutlet UIImageView *frontViewImage;
@property (nonatomic,weak) IBOutlet UIButton *frontViewAnswer1;
@property (nonatomic,weak) IBOutlet UIButton *frontViewAnswer2;
@property (nonatomic,weak)IBOutlet UIButton *frontViewAnswer3;
@property (nonatomic,weak) IBOutlet UIButton *frontViewAnswer4;
//second view outlets
@property (strong, nonatomic) IBOutlet UIImageView *backViewImage;
@property(nonatomic,weak) IBOutlet UIButton *backViewAnswer1;
@property(nonatomic,weak) IBOutlet UIButton *backViewAnswer2;
@property(nonatomic,weak) IBOutlet UIButton *backViewAnswer3;
@property(nonatomic,weak) IBOutlet UIButton *backViewAnswer4;
@implementation QuizMainViewController
//the method in which I make the API call
-(void)loadQuizObjectsMethod
{
// this is the part where I make the call and try to set the views based on whether the item index is odd or even.
operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *req, NSHTTPURLResponse *response, id jsonObject)
{
NSArray *array = [jsonObject objectForKey:@"Items"];
NSLog(@"Item array: %@",array);
NSMutableArray *tempArray = [[NSMutableArray alloc] init];
for (NSDictionary * itemDict in array) {
Item *myResponseItems = [[Item alloc] initWithDictionary:itemDict];
[tempArray addObject:myResponseItems];
}
_results = tempArray;
//this is where I need help with
Item *item = [_results objectAtIndex:_numberOfItemsIntheResultsArray];
for (_numberOfItemsIntheResultsArray in _results) {
if (_currentItemIndex %2 == 0) {
_currentItemIndex
//implement the front view
}else if (_currentItemIndex %2 == 1)
{
//implement the back view
}
}
I appreciate your help.
A: You need to use UIPageViewController class in such situations. First create a class called QuestionViewController as follows
QuestionViewController.h
@property (strong, nonatomic) IBOutlet UIImageView *image;
@property (nonatomic,weak) IBOutlet UIButton *answer1;
@property (nonatomic,weak) IBOutlet UIButton *answer2;
@property (nonatomic,weak)IBOutlet UIButton *answer3;
@property (nonatomic,weak) IBOutlet UIButton *answer4;
@property (strong, nonatomic) Item *questionDetailsFromJSON;
QuestionViewController.m
-(void)viewDidLoad
{
[super viewDidLoad];
self.image.image = self.questionDetailsFromJSON.image;
//set all remaining outlets similar to this.
}
In the storyboard or xib, embed a UIPageViewController in your QuizMainViewController using a container view. Set QuizMainViewController as the delegate and data source of the page view controller.
Change your QuizMainViewController.m as follows
@interface QuizMainViewController ()
@property(nonatomic,weak)IBOutlet UIPageViewController *pvc;
@implementation QuizMainViewController
//Implement all of the other methods
- (UIViewController *)pageViewController:(UIPageViewController *)pageViewController viewControllerAfterViewController:(UIViewController *)viewController
{
int index = [self.results indexOfObject:[(QuestionViewController *)viewController questionDetailsFromJSON]];
index++;
QuestionViewController *qvc = nil;
if(index<self.results.count)
{
qvc = //Instantiate from story board or xib
qvc.questionDetailsFromJSON = [self.results objectAtIndex:index];
}
return qvc;
}
- (UIViewController *)pageViewController:(UIPageViewController *)pageViewController viewControllerBeforeViewController:(UIViewController *)viewController
{
int index = [self.results indexOfObject:[(QuestionViewController *)viewController questionDetailsFromJSON]];
index--;
QuestionViewController *qvc = nil;
if(index>=0)
{
qvc = //Instantiate from story board or xib
qvc.questionDetailsFromJSON = [self.results objectAtIndex:index];
}
return qvc;
}
Follow this tutorial for further details.
A: The other poster's suggestion about using a page view controller is good. That would let you manage the pages with questions on them in a nice clean way.
As to your question of a variable number of fields on the form:
The simplest thing to do is to build your form for the maximum number of questions. Then, when you display it, hide those buttons/images that are not used for the current question.
You can create outlets for each button/image and then manipulate them directly, or you can assign tag numbers to them. If you use tag numbers it's a little easier to write code that shows the correct number of answer buttons.
Alternatively you could write code that creates and adds the buttons to the view controller hierarchy on the fly, but creating view objects directly in code is a little more work than doing it with IB. That's more than I can explain in detail in a forum post. For that, you should probably search for a tutorial on creating buttons through code rather than IB. It ends up being like 4 or 5 lines of code per button.
A rough outline of the steps: You have to create a button using initWithFrame: or the UIButton buttonWithType: class method. You have to add a target/action to the button. You have to set its title. You have to set its frame, you have to save a pointer to it in an instance variable or an array, and you have to add it as a subview of your view controller's content view.
| |
doc_1454
|
The thing is that I want to create the menu dynamically, but I need to use a javascript object. How can I convert an array into the expected object?
Imagine that I have this array:
var arr = ["element1", "element2"];
and I want to convert it to:
{
element1: myFunction,
element2: myFunction
}
(myFunction is a function that it's defined in the scope)
Any suggestions? Thanks!
A: Use forEach function on array and create key value pair.
var arr = ["element1", "element2"];
var obj = {};
arr.forEach(function(item){
obj[item]= function myfunction(){};
});
console.log(obj);
A: Do something like this
arr.reduce((accumulator, value) => {
accumulator[value] = myFunction;
return accumulator;
}, {})
Basically what's happening, is that you are using the reduce function to do the same thing as what's happening in the solution with the for-loop.
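On modern engines you can also get the same result with Object.fromEntries (ES2019) — a minimal sketch, where myFunction is just a stand-in for the function defined in your scope:

```javascript
// Build { element1: fn, element2: fn } from an array of names:
// map each name to a [key, value] pair, then assemble the object.
function myFunction() {}  // stand-in for the function defined in scope

const arr = ["element1", "element2"];
const obj = Object.fromEntries(arr.map(name => [name, myFunction]));
```

Like the reduce version, this leaves the original array untouched and assigns the same function reference to every key.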
| |
doc_1455
|
Website
Main Source [Empty]
Unit Test [Maven]
Sketches [Empty]
I have always been able to open this and have the structure shown properly. I've cloned this repository to a different computer now and am only able to open it the first time after clone. When I try to open the project the next day, it opens with only the Unit Test folder. Is there any way I can get it to consistently open with all three folders? It says it can't open my Main Source folder.
| |
doc_1456
|
that is, just plot the 13th column (last iteration) against the 14th column.
times1<-times[timeindex] ### this is a vector
mat_ind <- matrix(0, nrow=120,ncol=13)
mat_ind [,1] <- logBF_Apollo_1[,1] ### all logBF_Apollo_1,..., logBF_Apollo_1 are matrices
mat_ind [,2] <- logBF_Apollo_2[,1]
mat_ind [,3] <- logBF_Apollo_3[,1]
mat_ind [,4] <- logBF_Apollo_4[,1]
mat_ind [,5] <- logBF_Apollo_5[,1]
mat_ind [,6] <- logBF_Apollo_6[,1]
mat_ind [,7] <- logBF_Apollo_7[,1]
mat_ind [,8] <- logBF_Apollo_8[,1]
mat_ind [,9] <- logBF_Apollo_9[,1]
mat_ind [,10] <- logBF_Apollo_10[,1]
mat_ind [,11] <- logBF_Apollo_11[,1]
mat_ind [,12] <- logBF_Apollo_12[,1]
mat_ind [,13] <- logBF_Apollo_13[,1]
mat_ind<- data.frame(mat_ind,times1)
library(ggplot2)
library(gridExtra)
for(i in 1:13 ){
p[[i]] <-ggplot(mat_ind, aes(x = times1, y = mat_ind[,i])) +
geom_line() +ylab("") +coord_flip() +
xlab("")}
figure1 <- grid.arrange(p[[1]],p[[2]],p[[3]],p[[4]],p[[5]],p[[6]],p[[7]],p[[8]],p[[9]],
p[[10]],p[[11]],p[[12]],p[[13]], ncol = 13, nrow =1)
figure
The results just repeat the last iteration of the for loop.
that is the problem is that the output of for loop is:
p[[1]]=p[[2]]=...=p[[13]] ### this a problem.
Could you please let me know how I can fix it?
A: Limey's comment is probably the reason for your frustrations and his solution would work too. So I'm not going to focus on solving your problem, but rather I'd like to propose to use facet_wrap() instead of arranging all the plots with grid.arrange(), based on the impression that all your plots have similar structure anyway.
I'm assuming you got a set of variables that are similarly named, like this:
library(ggplot2)
times <- 1:120
logBF_Apollo_1 <- matrix(rnorm(prod(120, 2)), nrow = 120)
logBF_Apollo_2 <- matrix(rnorm(prod(120, 2)), nrow = 120)
logBF_Apollo_3 <- matrix(rnorm(prod(120, 2)), nrow = 120)
logBF_Apollo_4 <- matrix(rnorm(prod(120, 2)), nrow = 120)
logBF_Apollo_5 <- matrix(rnorm(prod(120, 2)), nrow = 120)
# Too lazy to type the rest
Instead of manually copying the first column of each of those into a new matrix, we can program on the language to get a list of the first columns, which we can then arrange as a matrix.
first_columns <- lapply(1:5, function(i) {
sym <- as.symbol(paste0("logBF_Apollo_", i))
eval(sym)[, 1]
})
mat_ind <- do.call(cbind, first_columns)
What we'll do next is to format the names a bit, add the times column and convert the data from a wide format to a long format.
colnames(mat_ind) <- paste0("pretty_name_", seq_len(ncol(mat_ind)))
mat_ind <- data.frame(mat_ind, times)
df <- tidyr::pivot_longer(mat_ind, dplyr::starts_with("pretty_name"))
It then becomes pretty easy to generate 1 plot containing all panels.
ggplot(df, aes(times, value)) +
geom_line() +
facet_wrap(~ name, ncol = ncol(mat_ind) - 1) # -1 because of the times-column
| |
doc_1457
|
I'm using a provided assembly that has an interface I need to implement
public interface ICustomInterface
{
CustomType DoSomething(string name);
}
in my code I do like this:
public class MyClass: ICustomInterface
{
public MyClass()
{
}
// now I should implement the interface like this
public CustomType DoSomething(string name)
{
CustomType nType = new CustomType();
// do some work
return nType;
}
}
So far so good but in my implementation of the interface in the MyClass I need to make use of async await therefore the implementation should be like this:
public class MyClass: ICustomInterface
{
public MyClass()
{
}
// now I should implement the interface like this
public async Task<CustomType> DoSomething(string name)
{
CustomType nType = new CustomType();
await CallSomeMethodAsync();
// do some extra work
return nType;
}
}
And of course this doesn't work because it complains the interface ICustomInterface.DoSomething... is not implemented.
Is there a way to override the interface implementation that accepts async await?
I cannot modify the provided assembly.
A: That's impossible. If the interface requires the operation to be computed synchronously, then the operation needs to be performed synchronously. The closest you can get is to block on the asynchronous work inside the synchronous method (e.g. CallSomeMethodAsync().GetAwaiter().GetResult()), but that is sync-over-async: it ties up a thread and can deadlock when a synchronization context is present.
| |
doc_1458
|
I am pretty clear that if the derived class method is to be used, one can use the override keyword so that the base class method will be overriden by derived class. But I'm not sure about new, and sealed override.
A: public class Base
{
public virtual void SomeMethod()
{
Console.WriteLine("B");
}
}
public class Derived : Base
{
//Same method is written 3 times with different keywords to explain different behaviors.
//This one is Simple method
public void SomeMethod()
{
Console.WriteLine("D");
}
//This method has 'new' keyword
public new void SomeMethod()
{
Console.WriteLine("D");
}
//This method has 'override' keyword
public override void SomeMethod()
{
Console.WriteLine("D");
}
}
Now First thing First
Base b=new Base();
Derived d=new Derived();
b.SomeMethod(); //will always write B
d.SomeMethod(); //will always write D
Now the keywords are all about Polymorphism
Base b = new Derived();
*Using virtual in the base class and override in Derived will give D (polymorphism).
*Using override without virtual in Base will give an error.
*Similarly, writing the method in Derived without override (while Base declares it virtual) will write 'B', with a warning (because no polymorphism is done).
*To hide such a warning, as in the above point, write new before that simple method in Derived.
*The new keyword simply hides the warning that tells you a member with the same name already exists in the base class; apart from that warning, a plain hiding method and a new method behave the same.
*new and override cannot be used on the same method or property.
*sealed before a class or method prevents it from being inherited or overridden in a derived class; violating that gives a compile-time error.
A: Any method can be overridable (=virtual) or not. The decision is made by the one who defines the method:
class Person
{
// this one is not overridable (not virtual)
public String GetPersonType()
{
return "person";
}
// this one is overridable (virtual)
public virtual String GetName()
{
return "generic name";
}
}
Now you can override those methods that are overridable:
class Friend : Person
{
public Friend() : this("generic name") { }
public Friend(String name)
{
this._name = name;
}
// override Person.GetName:
public override String GetName()
{
return _name;
}
}
But you can't override the GetPersonType method because it's not virtual.
Let's create two instances of those classes:
Person person = new Person();
Friend friend = new Friend("Onotole");
When non-virtual method GetPersonType is called by Friend instance it's actually Person.GetPersonType that is called:
Console.WriteLine(friend.GetPersonType()); // "person"
When virtual method GetName is called by Friend instance it's Friend.GetName that is called:
Console.WriteLine(friend.GetName()); // "Onotole"
When virtual method GetName is called by Person instance it's Person.GetName that is called:
Console.WriteLine(person.GetName()); // "generic name"
When non-virtual method is called the method body is not looked up - compiler already knows the actual method that needs to be called. Whereas with virtual methods compiler can't be sure which one to call, and it is looked up at runtime in the class hierarchy from down to up starting at the type of instance that the method is called on: for friend.GetName it looks starting at Friend class and finds it right away, for person.GetName class it starts at Person and finds it there.
Sometimes you make a subclass, override a virtual method and you don't want any more overrides down in the hierarchy - you use sealed override for that (saying you are the last one who overrides the method):
class Mike : Friend
{
public sealed override String GetName()
{
return "Mike";
}
}
But sometimes your friend Mike decides to change his gender and thus his name to Alice :) You could either change original code or instead subclass Mike:
class Alice : Mike
{
public new String GetName()
{
return "Alice";
}
}
Here you create a completely different method with the same name (now you have two). Which method and when is called? It depends on how you call it:
Alice alice = new Alice();
Console.WriteLine(alice.GetName()); // the new method is called, printing "Alice"
Console.WriteLine(((Mike)alice).GetName()); // the method hidden by new is called, printing "Mike"
When you call it from Alice's perspective you call Alice.GetName, when from Mike's - you call Mike.GetName. No runtime lookup is made here - as both methods are non-virtual.
You can always create new methods - whether the methods you are hiding are virtual or not.
This applies to properties and events too - they are represented as methods underneath.
A: By default a method cannot be overridden in a derived class unless it is declared virtual, or abstract. virtual means check for newer implementations before calling and abstract means the same, but it is guaranteed to be overridden in all derived classes. Also, no implementation is needed in the base class because it is going to be re-defined elsewhere.
The exception to the above is the new modifier. A method not declared virtual or abstract can be re-defined with the new modifier in a derived class. When the method is called in the base class the base method executed, and when called in the derived class, the new method is executed. All the new keywords allows you to do is to have two methods with the same name in a class hierarchy.
Finally a sealed modifier breaks the chain of virtual methods and makes them not overridable again. This is not used often, but the option is there. It makes more sense with a chain of 3 classes each deriving from the previous one
A -> B -> C
if A has an virtual or abstract method, that is overridden in B, then it can also prevent C from changing it again by declaring it sealed in B.
sealed is also used in classes, and that is where you will commonly encounter this keyword.
I hope this helps.
A: The virtual keyword is used to modify a method, property, indexer or event declaration, and allow it to be overridden in a derived class. For example, this method can be overridden by any class that inherits it:
Use the new modifier to explicitly hide a member inherited from a base class. To hide an inherited member, declare it in the derived class using the same name, and modify it with the new modifier.
This is all to do with polymorphism. When a virtual method is called on a reference, the actual type of the object that the reference refers to is used to decide which method implementation to use. When a method of a base class is overridden in a derived class, the version in the derived class is used, even if the calling code didn't "know" that the object was an instance of the derived class. For instance:
public class Base
{
public virtual void SomeMethod()
{
}
}
public class Derived : Base
{
public override void SomeMethod()
{
}
}
...
Base d = new Derived();
d.SomeMethod();
will end up calling Derived.SomeMethod if that overrides Base.SomeMethod.
Now, if you use the new keyword instead of override, the method in the derived class doesn't override the method in the base class, it merely hides it. In that case, code like this:
public class Base
{
public virtual void SomeOtherMethod()
{
}
}
public class Derived : Base
{
public new void SomeOtherMethod()
{
}
}
...
Base b = new Derived();
Derived d = new Derived();
b.SomeOtherMethod();
d.SomeOtherMethod();
Will first call Base.SomeOtherMethod, then Derived.SomeOtherMethod. They're effectively two entirely separate methods which happen to have the same name, rather than the derived method overriding the base method.
If you don't specify either new or override, the resulting output is the same as if you specified new, but you'll also get a compiler warning (as you may not be aware that you're hiding a method in the base class, or indeed you may have wanted to override it and merely forgot to include the keyword).
An overriding property declaration may include the sealed modifier. Use of this modifier prevents a derived class from further overriding the property. The accessors of a sealed property are also sealed.
| |
doc_1459
|
If you create a simple iframe with designMode set to on, type a couple of words, and then paste some complex content in (in this example, an email including signature directly from Outlook, or a formatted document from MS Word), then upon modifying the original content, characters typed become invisible until space is pressed or focus is lost from the iframe.
What is more bizarre about the behaviour is that if the user presses return before pasting their content, then the problem does not occur. Also, performing the same steps in a content editable DIV does not result in the same problem (however, putting an editable DIV inside a non-editable iframe does result in the problem, so the iframe definitely appears to be the problem).
The problem does not occur in Google Chrome or Mozilla Firefox and the only workaround seems to be to ensure that there is a line break between the initial content and the pasted content.
I have also just noticed that the behaviour can be replicated very easily by ensuring that the iframe scrolls. The following code will replicate:
<iframe id="theframe" style="height:50px;"></iframe>
<script>
$(function() {
var iframe = document.getElementById('theframe');
iframe.contentDocument.designMode = 'on';
});
</script>
I have also setup a fiddle at http://jsfiddle.net/z76x7gzL/ that reliably replicates the problem.
In the fiddle, if you type "this is a test" followed by a space and then pasting at least 2 lines from Word, then go back to the end of the "this is a test" line and attempt to type, the text will be invisible.
Any help much appreciated,
| |
doc_1460
|
The user of this web application should be able to obtain files by downloading them; however, due to limited connectivity, he/she must be able to stop the download, exit the application, and, when logged in again, resume previous download(s). The same thing must be possible for uploads.
I have no much experience on that but customer has put some restrictions on us:
*
*He will deny any solution that consists in a new subapplication to be installed (like Java Web Start, things like that).
*He doesn't want to buy any certificates in case of Java Applets.
So, for anyone who has made something like that: how could I implement it using only server-side Java EE programming? What would be best suited for this case?
I appreciate any help.
A: You might find this tutorial from BalusC very helpful FileServlet Supporting Resume And Caching
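The heart of any resume-capable servlet (including the linked FileServlet) is honoring the HTTP Range header the browser sends when resuming. A minimal sketch of just the header-parsing step — the class and method names here are illustrative, not taken from the tutorial:

```java
// Parse an HTTP "Range: bytes=start-end" header into a start/end pair,
// clamped to the file length. A real servlet would then respond with
// status 206, Content-Range, and stream only those bytes.
public class RangeParser {

    /** Returns {start, end} (inclusive) for the given Range header. */
    public static long[] parse(String header, long fileLength) {
        if (header == null || !header.startsWith("bytes=")) {
            return new long[] {0, fileLength - 1}; // no range: whole file
        }
        String[] parts = header.substring(6).split("-", -1);
        long start = parts[0].isEmpty() ? 0 : Long.parseLong(parts[0]);
        long end = (parts.length < 2 || parts[1].isEmpty())
                ? fileLength - 1                              // open-ended "500-"
                : Math.min(Long.parseLong(parts[1]), fileLength - 1);
        return new long[] {start, end};
    }

    public static void main(String[] args) {
        long[] r = parse("bytes=500-", 1000);
        System.out.println(r[0] + "-" + r[1]); // resume from byte 500
    }
}
```

The tutorial above covers the remaining pieces (multipart ranges, ETag/If-Range validation, and the actual streaming).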
| |
doc_1461
|
That is, instead of doing
ssize_t n = read(socket_fd, buffer, count);
which obviously requires the kernel to do a memcpy from the network buffer into my supplied buffer, I would do something like
ssize_t n = fancy_read(socket_fd, &buffer, count);
and on return have buffer pointing to non memcpy()'ed data received from the network.
A: Initially I thought the AF_PACKET socket family could be of help, but it cannot.
Nevertheless it is technically possible, as there is nothing that prevents you from implementing a kernel module handling a system call that returns a user-mapped pointer to kernel data (even if it is not very safe).
There are a couple of questions regarding the call you would like to have:
*Memory management. How would you know the memory can still be accessed after the fancy_read system call returned?
*How would you tell the kernel to eventually free that memory? There would need to be some form of memory management in place, and if you would like the kernel to give you a safe pointer to non-memcpy'ed memory then a lot of changes would need to go into the kernel to enable this feature. Just imagine that all that data couldn't be freed before you tell the kernel it can, so the kernel would need to keep track of all of these returned pointers.
These could be done in a lot of ways, so basically yes, this is possible, but you need to take many things into consideration.
| |
doc_1462
|
Dim contract As String
Private Sub projectchart()
OpenConnection()
sql = "SELECT activityname, progress FROM '" & contract & "';"
dr = cmd.ExecuteReader
Chart2.Series("Progress").Points.Clear()
While dr.Read
Chart2.Series("Progress").Points.AddXY(dr("activityname"),dr("progress"))
End While
cmd.Dispose()
con.Close()
End Sub
When I run this code this error comes out
You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near "c00101" at line 1
c00101 is the table name and is what the variable contract holds
But when I try to run the code in the format below, everything runs okay, the chart I'm trying to display data with works perfectly.
sql = "SELECT activityname, progress FROM c00101;"
I really have no clue why this happens. Can any help me out?
A: If you'd looked at the actual SQL, rather than the code that builds it, you'd have seen the difference, i.e. the single quotes. Those are for specifying text values. Just as you put double quotes around literal Strings in VB but not around variables or other identifiers, so you put single quotes around text literals in SQL but not around identifiers. If you need to quote an identifier in MySQL/MariaDB, use backticks rather than single quotes.
| |
doc_1463
|
('BCT','Department Of Electronics and Computer Engineering'),
('BEL','Deparment Of Electrical Engineering'),
('BCE','Deparment Of Civil Engineering'),
('SHE','Deparment Of Science and Humanities'),
('BME','Deparment Of Mechanical Engineering'),
)
department = models.CharField (max_length =10 ,choices = DEPARTMENT_CHOICE,blank=True,verbose_name="Department")
But if we add e.g. ZZZ in department, it will be added to the database, and I want to prevent that. How can I prevent adding items which are not in the choices tuple?
A: The choices can only restrict what can be entered through a Django form. When you manually instantiate your object you can insert any value...
I guess the trick is to not allow the user to add any instance manually.
A: By default, the Django ORM does not call the validators to check if the values make sense. ModelForms call the appropriate cleaning and validation [Django-doc].
You can however call .full_clean() [Django-doc] on the model object before saving it. This will raise an error in case the object contains invalid data. For example:
>>> m = Model(department='ZZZ')
>>> m.full_clean()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/some/lib/python3.6/site-packages/django/db/models/base.py", line 1166, in full_clean
raise ValidationError(errors)
django.core.exceptions.ValidationError: {'department': ["Value 'ZZZ' is not a valid choice."]}
So this will raise a ValidationError in case the model object does not satisfies the constraints that are not enforced at database level.
Note that ORM calls still circumvent that: for example Model.objects.create(department='ZZZ') will ignore validation, as well as bulk updates, etc.
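If you want invalid choices rejected on every save() (not only through forms), the usual pattern is to override save() so it calls full_clean() first. Here is a framework-free sketch of that pattern — the Department class and VALID_CHOICES set are illustrative stand-ins for your model and its choices:

```python
# "Validate on save" pattern: save() runs full_clean() first, so an
# invalid choice raises before anything is persisted. On a real Django
# model you'd override Model.save the same way.
VALID_CHOICES = {"BCT", "BEL", "BCE", "SHE", "BME"}  # illustrative

class Department:
    def __init__(self, code):
        self.code = code

    def full_clean(self):
        # mirrors Django's check: reject values outside the choices
        if self.code not in VALID_CHOICES:
            raise ValueError(f"Value {self.code!r} is not a valid choice.")

    def save(self):
        self.full_clean()  # validate before "writing to the database"
        return True        # stand-in for the actual DB write
```

On the actual model the override would be `def save(self, *args, **kwargs): self.full_clean(); return super().save(*args, **kwargs)`, and the failure would surface as Django's ValidationError rather than ValueError. Bulk operations like bulk_create still bypass save(), as noted above.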
| |
doc_1464
|
function on_canvas_click(ev) {
x = ev.clientX - canvas1.offsetLeft-40;
y = ev.clientY - canvas1.offsetTop;
var canvas = document.getElementById("canvas1");
var context = canvas.getContext("2d");
context.fillStyle = "blue";
context.font = "bold 16px Arial";
context.fillText([chosenChord], [x], [y]);
}
What I want now is that if I click the draw text, I can drag it around the canvas.
One ostensible option would be creating the canvas in kineticjs and then using draggable:true or setDraggable(true), but I cannot figure out how to accomplish the main body of code in kinetic. Alternatively, perhaps there is a means of dragging the text without invoking kinetic.
A: I suggest you use double-clicks instead of single-clicks because KineticJS uses mousedown to indicate the start of a drag operation.
You can listen for stage double-clicks and then add draggable text like this:
$(stage.getContent()).on('dblclick', function (event) {
var pos=stage.getMousePosition();
var mouseX=parseInt(pos.x);
var mouseY=parseInt(pos.y);
var text=new Kinetic.Text({
x:mouseX,
y:mouseY,
text:"@:"+mouseX+"/"+mouseY,
fill:"blue",
draggable:true,
});
layer.add(text);
layer.draw();
});
Demo: http://jsfiddle.net/m1erickson/tLwSM/
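If you'd rather skip KineticJS entirely (the alternative the question mentions), the key is keeping your own list of drawn text items and hit-testing them on mousedown. The geometry is just a bounding-box check against each item's measured size — the item shape below is illustrative, and note that fillText's y coordinate is the text baseline:

```javascript
// Find the topmost text item under the cursor. Each item records the
// x/y passed to fillText plus its measured width and font height.
function hitTest(items, x, y) {
  for (let i = items.length - 1; i >= 0; i--) {  // topmost drawn last
    const t = items[i];
    if (x >= t.x && x <= t.x + t.width &&
        y >= t.y - t.height && y <= t.y) {       // y is the baseline
      return t;
    }
  }
  return null;
}
```

On mousedown you'd call hitTest with canvas-relative coordinates, remember the returned item, update its x/y on mousemove, then clear the canvas and redraw every item.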
| |
doc_1465
|
dt <- data.table(K=c("A","A","A","B","B","B"),Y=c("2010","2010","2011","2011","2011","2010"),Q1=c(2,3,4,1,3,4),Q2=c(3,3,3,1,1,1))
dt
K Y Q1 Q2
1: A 2010 2 3
2: A 2010 3 3
3: A 2011 4 3
4: B 2011 1 1
5: B 2011 3 1
6: B 2010 4 1
Let's say the values of K are persons, so we have two here. Quarters of year are stored in Q1 and Q2. Q2 is kind of a reference quarter-variable and the values always relate to year 2011). Now I want to pick those lines in dt, where, for each Person in K, Q1 lies in an interval of 4 quarters before the value of Q2.
An example:
Person A has value 3 in Q2, so values 2 (2011), 1 (2011), 4 (2010), and 3 (2010) should be picked. Considering this dataset, this would just be line 2. Value Q1=4 in line 3 is too large, value Q1=2 in line 1 is too small. For the second Person "B", only line 6 would be chosen. Not line 4, because this is the same quarter as in Q2 (I want only those smaller than the value in Q2), and line 5 is obviously greater than the value in Q2.
dt_new
K Y Q1 Q2
1: A 2010 3 3
2: B 2010 4 1
To sum up:
A value of say 4 in Q2 would mean: Pick all values in Q1 smaller than 4 where Y=2011, and pick all values in Q1 equal or greater than 4 (so just 4), where Y=2010. result: 3(2011),2(2011),1(2011),4(2010). This rule applies for all values of Q2. All this should be done for each Person.
I hope my problem got clear. I think there are many ways to solve this, but since I'm still learning data.table, I wanted to ask you for nice and elegant solutions (hopefully there are any).
Thanks
Edit:
Nearly found a solution: This gives me a logical vector. How can I extract the lines in the dataset?
setkey(dt,K)
dt[,(Q1<Q2 & Y=="2011")|(Q1>=Q2 & Y=="2010"),by="K"]
K V1
1: A FALSE
2: A TRUE
3: A FALSE
4: B FALSE
5: B FALSE
6: B TRUE
without doing this:
log <-dt[,(Q1<Q2 & Y=="2011")|(Q1>=Q2 & Y=="2010"),by="K"]$V1
dt[log]
A: This is a vanilla row-wise filtering so you don't need to (or should not) use grouping (by = "K"), just do:
dt[(Q1 < Q2 & Y == "2011") | (Q1 >= Q2 & Y == "2010"), ]
or maybe something more flexible if you are going to use ranges other than just 4 quarters:
quarter.diff <- function(Q1, Y1, Q2, Y2) {
4L * (as.integer(Y2) - as.integer(Y1)) +
(as.integer(Q2) - as.integer(Q1))
}
dt[quarter.diff(Q1, Y, Q2, Y2 = "2011") > 0L &
quarter.diff(Q1, Y, Q2, Y2 = "2011") <= 4L, ]
This is not just more general, it reads much better and makes the reference-year-is-2011 assumption explicit.
Notice how I was careful to convert all your columns into integers inside the quarter.diff function. Ideally, your year and quarter data would already be stored as integers rather than character or numeric.
Finally, if you are concerned that quarter.diff is called twice and speed is a concern, you can temporarily store the result as @Arun suggested in the comments:
dt[{qdiff <- quarter.diff(Q1, Y, Q2, Y2 = "2011")
qdiff > 0L & qdiff <= 4L}, ]
| |
doc_1466
|
Update: The readme for ZF 1.8 suggests that it's now possible to do in ZF 1.8 but I've been unable to track down where this is at in the documentation.
A: So after some research you have to use Zend_Search_Lucene_Interface_MultiSearcher. I don't see any mention of it in the documentation as of this writing, but if you look at the actual class in ZF 1.8 it's straightforward to use
$index = new Zend_Search_Lucene_Interface_MultiSearcher();
$index->addIndex(Zend_Search_Lucene::open('search/index1'));
$index->addIndex(Zend_Search_Lucene::open('search/index2'));
$index->find('someSearchQuery');
NB it doesn't follow PEAR syntax so it won't work with Zend_Loader::loadClass
A: That's exactly how I handled search for huddler.com. I used multiple Zend_Search_Lucene indexes, one per datatype. For the "all" option, I simply had another index, which included everything from all indexes -- so when I added docs to the index, I added them twice, once to the appropriate "type" index, and once to the "all" index. Zend Lucene is severely underfeatured compared to other Lucene implementations, so this was the best solution I found. You'll find that Zend's port supports only a subset of the lucene query syntax, and poorly -- even on moderate indexes (10-100 MB), queries as simple as "a*", or quoted phrases fail to perform adequately (if at all).
When we brought a large site onto our platform, we discovered that Zend Lucene doesn't scale. Our index reached roughly 1.0 GB, and simple queries took up to 15 seconds. Some queries took a minute or longer. And building the index from scratch took about 20 hours.
I switched to Solr; Solr not only performs 50x faster during indexing, and 1000x faster for many queries (most queries finish in < 5ms, all finish in < 100ms), it's far more powerful. Also, we were able to rebuild our 100,000+ document index from scratch in 30 minutes (down from 20 hours).
Now, everything's in one Solr index with a "type" field; I run multiple queries against the index for each search, each one with a different "type:" filter query, and one without a "type:" for the "all" option.
If you plan on growing your index to 100+ MB, you receive at least a few search requests per minute, or you want to offer any sort of advanced search functionality, I strongly recommend abandoning Zend_Search_Lucene.
A: I don't know how it integrates with Zend, but in Lucene one would use a MultiSearcher, instead of the usual IndexSearcher.
| |
doc_1467
|
manifest:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.test.testnfc"
android:versionCode="1"
android:versionName="1.0" >
<uses-sdk
android:minSdkVersion="14"
android:targetSdkVersion="14" />
<uses-permission android:name="android.permission.NFC" />
<uses-feature
android:name="android.hardware.nfc"
android:required="true" />
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name="com.test.testnfc.TestNFCActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<activity
android:name="com.test.testnfc.NFCLoginActivity"
android:label="@string/app_name"
android:launchMode="singleTask" >
<intent-filter>
<action android:name="android.nfc.action.NDEF_DISCOVERED" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
</activity>
<activity android:name="com.test.testnfc.FirstActivity" >
</activity>
<activity android:name="com.test.testnfc.SecondActivity" >
</activity>
<activity android:name="com.test.testnfc.ThirdActivity" >
</activity>
<activity android:name="com.test.testnfc.FourthActivity" >
</activity>
<activity android:name="com.test.testnfc.FifthActivity" >
</activity>
</application>
</manifest>
I don't want this. Instead, it should stay in the same activity.
Any idea about this?
A: Move the intent-filter for NFC detection into a Service, so that when a tag is discovered a Service is started instead of an Activity.
You can then use the Service to decide what you want to do and which Activity to launch.
A: Try setting these properties in the manifest for that activity:
android:clearTaskOnLaunch="true"
android:launchMode="singleTop"
Hope it will help.
| |
doc_1468
|
So, I need to be able to do something before an user leave a page, following a link or validating a form. How can I do that ?
I tried using window.onbeforeunload, but this triggers an alert asking the user if they really want to leave the website. I would have to do two actions before leaving a page:
*
*Recording the cookies
*Launch a trigger that is going to set the trackers
window.onbeforeunload = function (e) {
// Record the cookie
// Launch a trigger to add the trackers
return true;
};
Maybe I'm making a mistake, if anyone could give me a hand
A: Another question: because I use an event to create my tracker, is there some risk that the eventListener for my tracker will not be executed before the page is reloaded?
window.onbeforeunload = function (e) {
document.dispatchEvent(new Event('tracker.launch'));
};
My eventListener
document.addEventListener('tracker.launch', function() {
<!-- Google Analytics -->
(function(i,s,o,g,r,a,m){ i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
});
A: From the documentation:
When this event returns (or sets the returnValue property to) a value other than null or undefined, the user is prompted to confirm the page unload.
https://developer.mozilla.org/en-US/docs/Web/API/WindowEventHandlers/onbeforeunload
So you should not do a return true; at the end of this function.
A: You can always call your function before leaving the page using window.onbeforeunload():
window.onbeforeunload = function(){
recordCookie(); //function call for recording cookies
setTracker(); // function call for setting tracker
return; // returning null
};
The reason for the alert asking the user whether they really want to leave the website is that returning any value other than null or undefined from onbeforeunload ends up asking for confirmation.
| |
doc_1469
|
Been reading quite a few blogs and everyone takes up different things.
The list I've come up with so far as follows:
*
*Pure DI/Poor mans DI
*Explicit register (DI container)
*Convention over configuration
Also the related types of the actual injections:
*
*Constructor injection (most used)
*Property injection
*Bastard injection
I'm not asking for an explanation of the patterns/methods; I would just like the terms so I can read up on them myself!
| |
doc_1470
|
<messages>
<message name="Advertisement" msgtype="7" msgcat="app">
<field name="AdvId" required="Y" />
<field name="AdvTransType" required="Y" />
<field name="AdvRefID" required="N" />
<component name="Instrument" required="Y" />
<field name="AdvSide" required="Y" />
<field name="Quantity" required="Y" />
</message>
</messages>
<components>
<component name="Instrument">
<field name="Symbol" required="Y" />
<field name="SymbolSfx" required="N" />
<field name="SecurityID" required="N" />
<field name="SecurityIDSource" required="N" />
<group name="NoSecurityAltID" required="N">
<field name="SecurityAltID" required="N" />
<field name="SecurityAltIDSource" required="N" />
</group>
<field name="Product" required="N" />
<field name="CFICode" required="N" />
</component>
</components>
I want to iterate over each message and when I encounter a component tag, I want to replace the component xml with the fields/groups of the component.
I was attempting to use ReplaceChild, but it wasn't working as expected.
[void]LoadComponents() {
$this.xml.messages.message | ForEach-Object {
$m = $_
foreach ($node in $m.ChildNodes){
if ($node.LocalName -eq "component") {
Write-Host "Old Message: "
Write-Host ($m.field | Format-Table | Out-String)
$c = $this.GetComponent($node.name)
Write-Host "Component: "
Write-Host ($c.group | Format-Table | Out-String)
$m.ReplaceChild($c, $node)
Write-Host "New Message: "
Write-Host ($m.field | Format-Table | Out-String)
}
}
}
}
[System.Xml.XmlElement]GetComponent([string]$name) {
return $this.xml.components.component | Where-Object { $_.name -eq $name }
}
Edit:
I'm including the fact that I'm using powershell version 6.0.0.10 alpha on OSX, because apparently it's missing a lot of functionality.
A: Use XPath to select the component nodes directly and clone the replacement node in ReplaceChild.
$xml.SelectNodes('/root/messages/message/component') | ForEach {
$comp = $xml.SelectSingleNode('/root/components/component[@name="' + $_.name + '"]')
$message = $_.ParentNode
$message.ReplaceChild($comp.Clone(), $_)
}
This code assumes the root element is named root.
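For comparison, the same replace-the-reference-with-its-definition step can be sketched with Python's xml.etree.ElementTree (a hedged sketch using a trimmed-down version of the XML above; ElementTree has no ReplaceChild, so the swap is done with remove/insert at the same index):

```python
import copy
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<root>
  <messages>
    <message name="Advertisement">
      <field name="AdvId"/>
      <component name="Instrument"/>
    </message>
  </messages>
  <components>
    <component name="Instrument">
      <field name="Symbol"/>
    </component>
  </components>
</root>
""")

# Index the component definitions by name
defs = {c.get("name"): c for c in doc.find("components")}

for message in doc.find("messages"):
    for i, child in enumerate(list(message)):
        if child.tag == "component":
            # Swap the reference for a deep copy of its definition
            message.remove(child)
            message.insert(i, copy.deepcopy(defs[child.get("name")]))

print(ET.tostring(doc.find(".//message"), encoding="unicode"))
```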
| |
doc_1471
|
class RestructureExamples < ActiveRecord::Migration[6.1]
def up
rename_column :examples, :old_column_name, :new_column_name
Example.reset_column_information
Example.find_each do |example|
unless example.new_column_name.nil?
if example.new_column_name == 100
example.new_column_name = 300
else
example.new_column_name = (example.new_column_name.to_f * 2.989).to_i
end
end
end
end
def down
# Do the reverse - left out for brevity
end
end
Even after adding reset_column_information (from the docs: "Resets all the cached information about columns, which will cause them to be reloaded on the next request."), this throws NoMethodError: undefined method `new_column_name'.
The record example still has old_column_name. I was expecting that the column information would be updated in the database and in the model after calling rename_column together with reset_column_information.
I can see in the sources that rename_column performs an alter table SQL command. Checking the column names via SQL after rename_column reveals that the column is renamed correctly. So, I assume that only the model holds outdated information.
Probably there are several workarounds (use SQL instead of model, do the rename after converting the data, use two separate migrations, ...), but I would prefer my approach for comprehensibility.
A: You have to rename it inside change_table if you want it to work as you are using it now.
change_table :examples do |t|
t.rename :old_name, :new_name
end
Example.reset_column_information
# ....
A: I think I found an answer to my own question. As suggested in the migrations guide, it is possible to use a local model in combination with reset_column_information:
class RestructureExamples < ActiveRecord::Migration[6.1]
class Example < ActiveRecord::Base
end
def up
rename_column :examples, :old_column_name, :new_column_name
Example.reset_column_information
Example.find_each do |example|
unless example.new_column_name.nil?
if example.new_column_name == 100
example.new_column_name = 300
else
example.new_column_name = (example.new_column_name.to_f * 2.989).to_i
end
example.save!  # persist the converted value
end
end
end
def down
# Do the reverse - left out for brevity
end
end
This approach worked for me.
| |
doc_1472
|
git clone <myrepo>
git branch - indicates on master branch
git status - reports working directory is clean
git checkout <release-branch>
git status - shows a file has been deleted
Doing an ls <file> fails, indicating the file really is deleted.
gitk shows the red node indicating local uncommitted changes, but no changes were made in the repo, just a checkout to the branch. The parent commit has the file when looking at the tree in gitk.
git checkout -- <file> brings the file back.
However if I clone the same repo, but include the branch, then do a status, the file is present:
git clone -b <release-branch> <myrepo>
git status - reports working directory is clean
ls <file> - shows file exists
This is repeatable by several different people on different systems.
Why did changing branches cause this file to be deleted?
Here is the full command line sequence and output (names changed to protect the innocent):
$ rm -rf test-*
$ git clone <repo> test-master
Cloning into 'test-master'...
remote: Counting objects: 11478, done.
remote: Compressing objects: 100% (5918/5918), done.
remote: Total 11478 (delta 7856), reused 8429 (delta 5558)
Receiving objects: 100% (11478/11478), 7.48 MiB | 183 KiB/s, done.
Resolving deltas: 100% (7856/7856), done.
$ cd test-master
$ git branch
* master
$ git status
# On branch master
nothing to commit (working directory clean)
$ git checkout relbranch
Checking out files: 100% (112/112), done.
Branch relbranch set up to track remote branch relbranch from origin.
Switched to a new branch 'relbranch'
$ git status
# On branch relbranch
# Changes not staged for commit:
# (use "git add/rm <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# deleted: prtbatt.c
#
no changes added to commit (use "git add" and/or "git commit -a")
$ cd ..
$ git clone -b relbranch <repo> test-relbranch
Cloning into 'test-relbranch'...
remote: Counting objects: 11478, done.
remote: Compressing objects: 100% (5918/5918), done.
remote: Total 11478 (delta 7856), reused 8429 (delta 5558)
Receiving objects: 100% (11478/11478), 7.48 MiB | 178 KiB/s, done.
Resolving deltas: 100% (7856/7856), done.
$ cd test-relbranch
$ git branch
* relbranch
$ git status
# On branch relbranch
nothing to commit (working directory clean)
A: Turns out someone created the file twice on two different branches, without merging (yes, the necessary public floggings have taken place), and in one case used all uppercase letters while on the other branch used all lowercase letters. Once they realized the mistake, they deleted one of the files and merged, then went on vacation. But they did the delete from Windows, which isn't case sensitive. When they did the merge, it got rid of both copies. After returning from vacation they couldn't find the file on their branch, and this led to figuring out what was going on. After recreating the file and merging out, everything is fine.
| |
doc_1473
|
I have about 10K lines of MATLAB code with about 4 people working on it. Somewhere, someone has dumped a variable in a MATLAB script in the typical way:
foo
Unfortunately, I do not know what variable is getting output. And the output is cluttering out other more important outputs.
Any ideas?
p.s. Anyone ever try overwriting System.out? Since MATLAB and Java integration is so tight, would that work? A trick I've used in Java when faced with this problem is to replace System.out with my own version.
A: Ooh, I hate this too. I wish Matlab had a "dbstop if display" to stop on exactly this.
The mlint traversal from weiyin is a good idea. Mlint can't see dynamic code, though, such as arguments to eval() or string-valued figure handle callbacks. I've run into output like this in callbacks such as the following, where update_table() returns something under some conditions.
uicontrol('Style','pushbutton', 'Callback','update_table')
You can "duck-punch" a method in to built-in types to give you a hook for dbstop. In a directory on your Matlab path, create a new directory named "@double", and make a @double/display.m file like this.
function display(varargin)
builtin('display', varargin{:});
Then you can do
dbstop in double/display at 2
and run your code. Now you'll be dropped in to the debugger whenever display is implicitly called by the omitted semicolon, including from dynamic code. Doing it for @double seems to cover char and cells as well. If it's a different type being displayed, you may have to experiment.
You could probably override the built-in disp() the same way. I think this would be analogous to a custom replacement for Java's System.out stream.
Needless to say, adding methods to built-in types is nonstandard, unsupported, very error-prone, and something to be very wary of outside a debugging session.
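The System.out idea above translates naturally to Python, where the stream to swap is sys.stdout; here is a hedged sketch of the same trap-the-writer trick (TracingStdout and noisy are invented names for illustration):

```python
import io
import sys
import traceback

class TracingStdout(io.TextIOBase):
    """Wrap the real stdout and record which function performed each write."""
    def __init__(self, real):
        self.real = real
        self.origins = []

    def write(self, text):
        if text.strip():  # ignore bare newlines
            # The frame just above write() is the code doing the stray output
            self.origins.append(traceback.extract_stack()[-2].name)
        return self.real.write(text)

tracer = TracingStdout(sys.stdout)
sys.stdout = tracer

def noisy():
    print("oops")  # stands in for the statement missing its semicolon

noisy()
sys.stdout = tracer.real  # always restore the real stream
print("stray output came from:", tracer.origins)
```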
A: This is a typical pattern that mLint will help you find:
So, look on the right hand side of the editor for the orange lines. This will help you find not only this optimization, but many, many more. Notice also that your variable name is highlighted.
A: If you have a line such as:
foo = 2
and there is no ";" on the end, then the output will be dumped to the screen with the variable name appearing first:
foo =
2
In this case, you should search the file for the string "foo =" and find the line missing a ";".
If you are seeing output with no variable name appearing, then the output is probably being dumped to the screen using either the DISP or FPRINTF function. Searching the file for "disp" or "fprintf" should help you find where the data is being displayed.
If you are seeing output with the variable name "ans" appearing, this is a case when a computation is being done, not being put in a variable, and is missing a ';' at the end of the line, such as:
size(foo)
In general, this is a bad practice for displaying what's going on in the code, since (as you have found out) it can be hard to find where these have been placed in a large piece of code. In this case, the easiest way to find the offending line is to use MLINT, as other answers have suggested.
A: I like the idea of "dbstop if display"; however, this is not a dbstop option that I know of.
If all else fails, there is still hope. Mlint is a good idea, but if there are many thousands of lines and many functions, then you may never find the offender. Worse, if this code has been sloppily written, there will be zillions of mlint flags that appear. How will you narrow it down?
A solution is to display your way there. I would overload the display function. Only temporarily, but this will work. If the output is being dumped to the command line as
ans =
stuff
or as
foo =
stuff
Then it has been written out with display. If it is coming out as just
stuff
then disp is the culprit. Why does it matter? Overload the offender. Create a new directory in some directory that is on top of your MATLAB search path, called @double (assuming that the output is a double variable. If it is character, then you will need an @char directory.) Do NOT put the @double directory itself on the MATLAB search path, just put it in some directory that is on your path.
Inside this directory, put a new m-file called disp.m or display.m, depending upon your determination of what has done the command line output. The contents of the m-file will be a call to the function builtin, which will allow you to then call the builtin version of disp or display on the input.
Now, set a debugging point inside the new function. Every time output is generated to the screen, this function will be called. If there are multiple events, you may need to use the debugger to allow processing to proceed until the offender has been trapped. Eventually, this process will trap the offensive line. Remember, you are in the debugger! Use the debugger to determine which function called disp, and where. You can step out of disp or display, or just look at the contents of dbstack to see what has happened.
When all is done and the problem repaired, delete this extra directory, and the disp/display function you put in it.
A: You could run mlint as a function and interpret the results.
>> I = mlint('filename','-struct');
>> isErrorMessage = arrayfun(@(S)strcmp(S.message,...
'Terminate statement with semicolon to suppress output (in functions).'),I);
>>I(isErrorMessage ).line
This will only find missing semicolons in that single file. So this would have to be run on a list of files (functions) that are called from some main function.
If you wanted to find calls to disp() or fprintf() you would need to read in the text of the file and use regular expresions to find the calls.
Note: If you are using a script instead of a function you will need to change the above message to read: 'Terminate statement with semicolon to suppress output (in scripts).'
A: Andrew Janke's overloading is a very useful tip
The only other thing is that instead of using dbstop, I find the following works better, for the simple reason that putting a stop in display.m will cause execution to pause every time display.m is called, even if nothing is written.
This way, the stop will only be triggered when display is called to write a non-empty string, and you won't have to step through a potentially very large number of useless display calls.
function display(varargin)
builtin('display', varargin{:});
if isempty(varargin{1})==0
keyboard
end
A: A foolproof way of locating such things is to iteratively step through the code in the debugger observing the output. This would proceed as follows:
*
*Add a break point at the first line of the highest level script/function which produces the undesired output. Run the function/script.
*step over the lines (not stepping in) until you see the undesired output.
*When you find the line/function which produces the output, either fix it, if it's in this file, or open the subfunction/script which is producing the output. Remove the break point from the higher level function, and put a break point in the first line of the lower-level function. Repeat from step 1 until the line producing the output is located.
Although a pain, you will find the line relatively quickly this way unless you have huge functions/scripts, which is bad practice anyway. If the scripts are like this you could use a sort of partitioning approach to locate the line in the function in a similar manner. This would involve putting a break point at the start, then one half way though and noting which half of the function produces the output, then halving again and so on until the line is located.
A: I had this problem with much smaller code and it's a bugger, so even though the OP found their solution, I'll post a small cheat I learned.
1) In the Matlab command prompt, turn on 'more'.
more on
2) Resize the prompt-y/terminal-y part of the window to a mere line of text in height.
3) Run the code. It will stop wherever it needed to print, as there isn't the space to print it ( more is blocking on a [space] or [down] press ).
4) Press [ctrl]-[C] to kill your program at the spot where it couldn't print.
5) Return your prompt-y area to normal size. Starting at the top of trace, click on the clickable bits in the red text. These are your potential culprits. (Of course, you may need to have pressed [down], etc, to pass parts where the code was actually intended to print things.)
A: You'll need to traverse all your m-files (probably using a recursive function, or unix('find -type f -iname *.m') ). Call mlint on each filename:
r = mlint(filename);
r will be a (possibly empty) structure with a message field. Look for the message that starts with "Terminate statement with semicolon to suppress output".
| |
doc_1474
|
i am making an application on Flask using the SQLAlchemy ORM
now the problem is, i may have messed up the creating of a table user;
in the models.py the code looks like,
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(50))
username = db.Column(db.String(25), unique=True)
password = db.Column(db.String(50))
email = db.Column(db.VARCHAR(50), unique=True)
registerdate= db.Column(db.DateTime)
def __init__(self,
name,
username,
password,
email,
registerdate=None):
self.name = name
self.username = username
self.password = password
self.email = email
if registerdate is None:
registerdate = datetime.utcnow()
self.registerdate = registerdate
Now, the error is something like
OperationalError: table User has no column named user_name
this is because I messed up the table creation, creating the table with the column "user_name" first; when it gave me an error related to the underscores, I tried to modify the code but instead ran into another error...
so how do i delete the previous 'User' table in SQL Alchemy ORM without using the usual sqlite3 syntax and commands?
P.S.: I am using the Ubuntu 16.04 Python terminal, no IDE like Atom or PyCharm...
A: Alright! So after being confused by 'engine' and a ton of other technicalities regarding sqlite,
I finally have the solution!
First, enter the Python terminal through the Ubuntu Terminal... and do:
from YourProject import app,db
from YourProject.models import Table1,Table2....User #import the classes whose data you want to manipulate
import sqlite3
con = sqlite3.connect('ABSOLUTE PATH TO YOUR DB FILE')
c = con.cursor()
c.execute("DROP TABLE User;")
con.commit() # commit on the sqlite3 connection; db.session.commit() is not needed here
and that's it, that's how i deleted my troublesome 'User' table. I then created a new one and it works wonders !
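Putting the whole drop-and-recreate cycle together in a self-contained form (a hedged sketch: the table and column names are illustrative stand-ins, and an in-memory database replaces the project's .db file):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stands in for the project's .db file
c = con.cursor()

# Simulate the badly created table with the wrong column name...
c.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, user_name TEXT)")

# ...drop it (IF EXISTS avoids an error if it is already gone)...
c.execute("DROP TABLE IF EXISTS user")
con.commit()

# ...and recreate it with the corrected column name
c.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT)")
c.execute("INSERT INTO user (username) VALUES (?)", ("alice",))
con.commit()

print(c.execute("SELECT username FROM user").fetchone()[0])
```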
Also, my User Class code was previously not too well formatted, as in
def __init__(self,name,username,password,email,registerdate=None):
self.name = name
self.username = username
self.password = password
self.email = email
if registerdate is None:
registerdate = datetime.utcnow()
self.registerdate = registerdate
notice how the class parameters weren't in the 'stairs' formation before? That also bugged me a lot in creating the table and adding data to it, so make sure you take care of that.
This is an issue beginners might come across and find daunting; I hope this helps someone out!
Cheers!
| |
doc_1475
|
A: It has been a long time (seven or more years) since I last worked with <frame>.
Because you didn't post any HTML source code :-|, I created a theoretical situation:
<html>
<head>
<script type="text/javascript">
function nullRand () {
//document.getElementById("frameBid").src = "myNewInternalPage.html";
document.getElementById("frameBid").src = document.getElementById("frameAid").src;
}
</script>
</head>
<frameset cols="25,*" frameborder="no" border="0" framespacing="0" framepadding="0">
<frame id="frameAid" name="frameAname" src="myHtmlFile.html" noresize>
<frame id="frameBid" name="frameBname" src="" noresize>
</frameset>
</html>
In my theoretical file 'myNewInternalPage.html' you have to insert the method call:
<a href="#" onMouseOver="parent.nullRand()">An link</a>
Note: frame is not appropriate for showing external content. That's why iframe exists.
Update:
After a hard fight with the OP I got something in my hands. Here is a solution:
1st: Replace the old JavaScript code in 'nullRand()' with the new one:
function nullRand (linkObject)
{
document.getElementById("frameBid").src = linkObject.href; // The new way to access a frame
//top.frames["center"].src = linkObject.href; // The old way to access a frame
}
2nd: Modify the a tag -where the radio streams are listed- as follows:
<a href="the Url from your radiostream" ... onMouseOver="parent.nullRand(this)">...</a>
So it should work. In my eyes you should think about a new concept for your task.
| |
doc_1476
|
I get the error below. I run the same command directly in unix and it works. I have searched up and down. The python environment in unix is fine.
Error
Traceback (most recent call last):
File "Mixx.py", line 44, in <module>
call(["mi_xx", "-s", "20141215","-e","20150121","-p",'TX%_XX%',"-f","test","-i","-x","-d"])
File "/bb/util/common/ActivePythonEE_2.6.2_32bit/lib/python2.6/subprocess.py", line 444, in call
return Popen(*popenargs, **kwargs).wait()
File "/bb/util/common/ActivePythonEE_2.6.2_32bit/lib/python2.6/subprocess.py", line 595, in __init__
errread, errwrite)
File "/bb/util/common/ActivePythonEE_2.6.2_32bit/lib/python2.6/subprocess.py", line 1092, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory"
| |
doc_1477
|
Note that I wrote my web application in Java and deployed it to WebLogic on a Linux server.
Notes:
what encoding does your text file use? windows-1256.
How are you reading it? InputStream.
How are you writing it? redirect the result of input buffer to jsp page by request.
What is the encoding of your JSP? windows-1256.
Have you debugged into the code to find out whether the problem is the reading or the writing? It works fine when deployed to a Windows server.
A: I am sorry guys, I found the solution after many tries. The solution is:
int BUFFER_SIZE = 8192;
BufferedReader br = new BufferedReader(new InputStreamReader(in, "windows-1256"), BUFFER_SIZE);
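For what it's worth, the same fix expressed in Python terms: the byte stream must be wrapped with an explicit windows-1256 decoder, mirroring the InputStreamReader above (a hedged sketch with a simulated in-memory stream; the Arabic sample text is illustrative):

```python
import io

arabic = "مرحبا"  # sample Arabic text ("hello")
raw = io.BytesIO(arabic.encode("windows-1256"))  # simulated windows-1256 file

# Wrap the byte stream with the correct decoder -- the Python counterpart
# of new InputStreamReader(in, "windows-1256")
reader = io.TextIOWrapper(raw, encoding="windows-1256")
text = reader.read()
print(text == arabic)  # the round-trip decodes correctly
```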
| |
doc_1478
|
Thank you in advance for the help.
A: First, you have to open Android Studio -> Tools -> Android -> SDK Manager.
See the screenshot below
When applying, it will ask you to accept the license. Accept it and download API 26; after that it will work. :)
A: Unhandled Promise rejection comes mainly with Cordova version 8.0.0.
Could you downgrade the version to 7 or 6 and try to build?
| |
doc_1479
|
Is it possible to test a service without having to initialize all constructor services?
Because I would like to use the service class as a mock (like this),
but initializing all services would be a lot of extra work.
| |
doc_1480
|
Now I am working on its second part that is to remove the randomly added characters and digits from the obfuscated String.
My code works for removing one random character and digit from the string (when encryption_str is set to 1), but for removing two, three, ... n characters (when encryption_str is set to 2, 3, or n), I don't understand how to modify it.
My Code:
import string, random
def decrypt():
encryption_str = 2 #Doesn't produce correct output when set to any other number except 1
data = "osqlTqlmAe23h"
content = data[::-1]
print("Modified String: ",content)
result = []
result[:0] = content
indices = []
for i in range(0, encryption_str+3): #I don't understand how to change it
indices.append(i)
for i in indices:
del result[i+1]
message = "".join(result)
print("Original String: " ,message)
decrypt()
Output for Encryption level 1 (Correct Output)
Output for Encryption level 2 (Incorrect Output)
A: Why not try some modulo arithmetic? Maybe with your original string, try something like:
''.join([x for num, x in enumerate(data[::-1]) if num % (encryption_str + 1) == 0])
A: It's easy to append chars; it's a bit more difficult to remove them, because that changes the string length and the positions of the chars.
But there is an easy way: retrieve the good ones, and for that you just need to iterate with encryption_str+1 as the step (which avoids adding an if on the index).
def decrypt(content, nb_random_chars):
content = content[::-1]
result = []
for i in range(0, len(content), nb_random_chars + 1):
result.append(content[i])
message = "".join(result)
print("Modified String: ", content)
print("Original String: ", message)
# 3 lines in 1 with :
result = [content[i] for i in range(0, len(content), nb_random_chars + 1)]
Both will give hello
decrypt("osqlTqlmAe23h", 2)
decrypt("osqFlTFqlmFAe2F3h", 3)
A: How about a list comprehension (which is really just a slightly more compact notation for @azro's answer)?
result = content[0::(encryption_str+1)]
That is, take every encryption_str+1'd character from content starting with the first.
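Putting the two ideas together, a minimal sketch (the function name decrypt_slice is just illustrative) that undoes the reversal and then keeps every (encryption_str + 1)-th character:

```python
def decrypt_slice(data, encryption_str):
    """Reverse the obfuscated string, then keep one original character
    for every block of encryption_str random ones via slicing."""
    content = data[::-1]              # undo the reversal from encryption
    return content[0::encryption_str + 1]

print(decrypt_slice("osqlTqlmAe23h", 2))      # -> hello
print(decrypt_slice("osqFlTFqlmFAe2F3h", 3))  # -> hello
```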
| |
doc_1481
|
When I try to run ffmpeg I get this error message:
but the catch is that I only get the message if I run ffmpeg from my server application or from the folder where the server application is located, e:\StreamServer2\
If I run it from another folder, e.g. e:\StreamServer2.BAK\, there is no problem.
The same goes for when I run the StreamServer located in StreamServer2.BAK.
The ffmpeg.exe is a copy of the same file.
I have tried to delete ffmpeg.exe and copy a new one to the folder, as I assumed that ffmpeg.exe was damaged when we moved the server to our new data center, but that did not make any difference.
I am running ffmpeg with thise parameters:
ffmpeg.exe -analyzeduration 0 -fpsprobesize 0 -rtsp_transport tcp -i rtsp://addrOnIpCam -g 52 -movflags frag_keyframe+empty_moov -b:v 64k -q 5 test.avi
and from server application:
ffmpeg.exe -analyzeduration 0 -fpsprobesize 0 -rtsp_transport tcp -i rtsp://addrOnIpCam -g 52 -movflags frag_keyframe+empty_moov -b:v 64k -q 5 -an -
| |
doc_1482
|
A:
[client]
default-character-set=charset_name
source : http://dev.mysql.com/doc/refman/5.0/en/charset-configuration.html
more on charset configuration - http://dev.mysql.com/doc/refman/5.0/en/mysql-command-options.html#option_mysql_default-character-set
| |
doc_1483
|
A: You can install the tools from the SQL Server evaluation edition and the DB from SQL Server Express, and then use Profiler to log all the queries made against the Express instance.
Check this out.
| |
doc_1484
|
<%= semantic_form_for @vendor do |f| %>
  <% f.inputs do %>
    <%= f.input :name %>
    <%= f.input :tag_list %>
  <% end %>
  <%= f.buttons %>
<% end %>
Vendor.rb is acts_as_taggable_on.
However, when I enter strings into the field for tag_list, nothing gets stored when I go back into the console to check on vendor.tags.
What can I do to allow input of tags from a form?
def new
  @vendor = Vendor.new
end

def create
  @vendor = Vendor.new(params[:vendor])
  if @vendor.save
    flash[:notice] = "Successfully created vendor."
    redirect_to @vendor
  else
    render :action => 'new'
  end
end
A: Are you using attr_accessible in your model?
If yes, add :tag_list to it.
For Example:
attr_accessible :attr1, :tag_list
| |
doc_1485
|
from pprint import pprint
from PalmDB.PalmDatabase import PalmDatabase
pdb = PalmDatabase()
with open('testdb.pdb','rb') as data:
pdb.fromByteArray(data.read())
pprint(dir(pdb))
pprint(pdb.attributes)
print pdb.__doc__
#print pdb.records
print pdb.records[10].toXML()
which gives me the XML representation of a record (?) with some nasty long payload attribute, which doesn't resemble any kind of human-readable text to me. I just want to read the contents of the pdb file. Is there a guide/tutorial for this library? What would you do to figure out the proper way to get things done in my situation?
A: There are two problems with the PalmDB module. The first is that it comes with almost no documentation. The other is that in order to do anything useful with the records in the database you need to figure out the binary structure for the particular record type you're dealing with (it's different for each type) and unpack it yourself. I believe the package author did some work with the ToDo format, but none of the others as far as I know. What I needed was something to unpack Palm address records, so I rolled my own module. I posted it [1] so you can take a look get an idea of what's involved. If it's the address book records you're interested in, you're in luck. I created it several years ago, so I don't remember all the details of what I had to do, but I did update it to work with the current version [2] of PalmDB, which completely broke all code using older versions. Hope it's useful!
[1] http://pastebin.com/f75a93f48
[2] 1.8.1
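As the answer says, the unavoidable part is unpacking each record's binary layout yourself; the standard struct module is the usual tool. A hedged sketch (the field layout below is invented for illustration and does not match any real Palm record format):

```python
import struct

# Hypothetical layout: 2-byte big-endian id, 4-byte timestamp,
# then an 8-byte NUL-padded name field.
RECORD_FMT = ">HI8s"

def unpack_record(payload):
    rec_id, ts, raw_name = struct.unpack(RECORD_FMT, payload)
    return {"id": rec_id,
            "ts": ts,
            "name": raw_name.rstrip(b"\x00").decode("ascii")}

# Round-trip a sample payload through pack/unpack
sample = struct.pack(RECORD_FMT, 7, 123456, b"alice")
print(unpack_record(sample))
```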
| |
doc_1486
|
A
3
2
4
1
7
8
What I want to achieve is to place (sort) the numbers in the column such that each cell value takes its position according to the Excel row numbering, which means:
I should have something like this
1
2
3
4
7
A: If it's as simple as stated, this should work.
In column B, say:
=IF(COUNTIF(A:A,ROW())>0,ROW(),"")
Note: It does not account for duplicates, or anything complicated really.
Further Note: To clarify, all this does is check if the row that the formula is in, is in column A. If it is, then it returns the current row.
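The same COUNTIF/ROW logic can be sketched in Python to see what the formula produces for the sample column (a hedged illustration; place_by_row is an invented name, and like the formula it leaves blanks where a row number is absent):

```python
def place_by_row(values, n_rows):
    """For each 1-based row, emit the row number if it appears
    anywhere in the input column, else an empty cell."""
    present = set(values)
    return [row if row in present else "" for row in range(1, n_rows + 1)]

print(place_by_row([3, 2, 4, 1, 7, 8], 8))
# -> [1, 2, 3, 4, '', '', 7, 8]
```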
| |
doc_1487
|
I would like to add an image to x1 - the JPanel with Grid Layout.
For this I would have to add the paintComponent method and use the drawImage() method. My question is: how will Eclipse know which panel to add the image to? I'd prefer not to create a separate class for x1; I did that before and it just didn't work out right, and I'd rather not go down that incredibly frustrating road again, I'm sorry!
I have considered using a Glass Pane however I would no longer be able to see the images of the JLabels - which is really important.
I think adding the image to the background of the JPanel will be best because I also want to have a button which shows/hides the grid lines - the borders of the JLabels.
I hope I'm making sense.
Below is the code. I understand it's a lot of code in one class. I did have it in two separate classes but that just didn't work for me; I find this much easier. I hope you don't mind.
package roverMars;
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.Component;
import java.awt.Dimension;
import java.awt.GridLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.BorderFactory;
import javax.swing.Box;
import javax.swing.BoxLayout;
import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JCheckBox;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTextField;
import javax.swing.border.Border;
import javax.swing.border.EtchedBorder;
public class MenuPanel extends JPanel {
private static final long serialVersionUID = -3928152660110599311L;
public JPanel frame, textfield, buttons, cpPanel;
public JTextField Commands;
public JButton Plot, Submit, Undo;
public JLabel Position, cpLabel;
public Border loweredetched;
public JCheckBox gridLines;
public SubmitButton sub;
static final int rows = 100, columns = 100;
// ******IMAGES******
static BufferedImage North, South, West, East;
public void ImageLoader() {
try {
North = ImageIO.read(this.getClass().getResource("North.png"));
South = ImageIO.read(this.getClass().getResource("South.png"));
West = ImageIO.read(this.getClass().getResource("West.png"));
East = ImageIO.read(this.getClass().getResource("East.png"));
} catch (IOException e) {
// TODO Auto-generated catch block
System.out.println("Error occured: " + e);
e.printStackTrace();
}
}
// ******IMAGES******
public void createMenu(JPanel p) {
// Text Field Panel
Commands = new JTextField(20);
textfield = new JPanel();
textfield.setPreferredSize(new Dimension(150, 50));
textfield.setBorder(BorderFactory.createEmptyBorder(10, 10, 10, 10));
textfield.setBackground(new Color(204, 153, 255));
textfield.add(Commands);
// Have a button next to the Text Field to clear contents.
// Might need to give the JPanel a new Flow Layout.
// Buttons Panel
buttons = new JPanel();
buttons.setPreferredSize(new Dimension(150, 250));
buttons.setLayout(new BoxLayout(buttons, BoxLayout.Y_AXIS));
buttons.setBackground(new Color(170, 051, 170));
// Create and Add buttons to the Buttons Panel
buttons.add(Box.createRigidArea(new Dimension(30, 10)));
Plot = new JButton("Plot");
Plot.setAlignmentX(Component.CENTER_ALIGNMENT);
Plot.setAlignmentY(Component.CENTER_ALIGNMENT);
buttons.add(Plot);
buttons.add(Box.createRigidArea(new Dimension(30, 10)));
Submit = new JButton("Submit");
Submit.setAlignmentX(Component.CENTER_ALIGNMENT);
Submit.setAlignmentY(Component.CENTER_ALIGNMENT);
Submit.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
sub = new SubmitButton();
sub.Submit(Commands);
cpLabel.setText("*****SET CURRENT POSITION*****");
labels[2][2].setIcon(new ImageIcon(North));
// I will be able to move the rover from here using for loops
// and if statements.
}
});
buttons.add(Submit);
buttons.add(Box.createRigidArea(new Dimension(30, 10)));
Undo = new JButton("Undo");
Undo.setAlignmentX(Component.CENTER_ALIGNMENT);
Undo.setAlignmentY(Component.CENTER_ALIGNMENT);
buttons.add(Undo);
buttons.add(Box.createRigidArea(new Dimension(30, 10)));
gridLines = new JCheckBox();
gridLines.setText("Show gridlines");
gridLines.setAlignmentX(Component.CENTER_ALIGNMENT);
gridLines.setAlignmentY(Component.CENTER_ALIGNMENT);
gridLines.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent arg0) {
// Set the colour of the JLabels array from here.
System.out.println("clicked");
}
});
buttons.add(gridLines);
buttons.add(Box.createRigidArea(new Dimension(30, 20)));
loweredetched = BorderFactory
.createEtchedBorder(EtchedBorder.RAISED);
cpLabel = new JLabel("Current position: ", JLabel.CENTER);
cpPanel = new JPanel();
cpPanel.setBackground(new Color(153, 153, 204));
cpPanel.add(cpLabel);
cpPanel.setBorder(loweredetched);
// Panel for the main window
JPanel frame = new JPanel();
frame.setPreferredSize(new Dimension(150, 350));
frame.setLayout(new BorderLayout());
frame.add(textfield, BorderLayout.NORTH);
frame.add(buttons, BorderLayout.CENTER);
// This Main Panel
p.setPreferredSize(new Dimension(350, 700));
p.setBackground(new Color(153, 153, 204));
p.setLayout(new BoxLayout(p, BoxLayout.Y_AXIS));
p.setBorder(BorderFactory.createEmptyBorder(10, 50, 10, 25));
p.add(Box.createRigidArea(new Dimension(100, 100)));
p.add(frame);
p.add(Box.createRigidArea(new Dimension(15, 15)));
p.add(cpPanel);
p.add(Box.createRigidArea(new Dimension(100, 300)));
}
// From line 142 to 202 is everything to do with creating the Grid
public void StringArray(String[][] labelText) {
int x = 1; // increment rows
for (int i = 0; i < labelText.length; i++) { // x
for (int j = 0; j < labelText.length; j++) { // y
labelText[i][j] = Integer.toString(x); // populate string
x++;
}
}
}
public void JLabelArray(JLabel[][] label, String[][] labelText) {
for (int i = 0; i < label.length; i++) { // x
for (int j = 0; j < label.length; j++) { // y
label[i][j] = new JLabel();
label[i][j].setText(labelText[i][j]);
label[i][j].setOpaque(false);
label[i][j].setBorder(BorderFactory.createLineBorder(new Color(
0, 155, 200), 1));
// label[i][j].setBackground(Color.WHITE);
}
}
}
public void populateGrid(JPanel Grid, JLabel[][] label) { // Add Labels to
// Panel,
String x1[][] = new String[rows][columns];
StringArray(x1);
JLabelArray(label, x1);
Grid.setBackground(Color.RED);
int gHeight = label.length, gWidth = label.length;
Grid.setLayout(new GridLayout(gWidth, gHeight));
for (int i = 0; i < label.length; i++) { // x
for (int j = 0; j < label.length; j++) { // y
Grid.add(label[i][j]);
}
}
}
public void createGrid(JPanel finalPanel, JPanel Grid) {
// Add Grid to Scroll Pane
JScrollPane x4 = new JScrollPane(Grid);
x4.setPreferredSize(new Dimension(600, 600)); // DO NOT DELETE THIS.
x4.setHorizontalScrollBarPolicy(JScrollPane.HORIZONTAL_SCROLLBAR_ALWAYS);
x4.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
// Add Scroll Pane to another Panel with the Border
finalPanel.setBackground(new Color(153, 153, 204));
finalPanel.setBorder(BorderFactory.createEmptyBorder(50, 25, 50, 50));
finalPanel.add(x4);
}
// Variables for the createGUI method.
static MenuPanel t = new MenuPanel();
static JPanel menu = new JPanel();
static JPanel finalPanel = new JPanel();
static JPanel gridPanel = new JPanel();
static JLabel labels[][] = new JLabel[rows][columns];
public static void createGUI() {
t.createMenu(menu);
t.populateGrid(gridPanel, labels);
t.createGrid(finalPanel, gridPanel);
JFrame f = new JFrame();
f.setTitle("Project Testing");
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.setVisible(true);
f.setLocation(100, 100);
f.setAlwaysOnTop(true);
f.setSize(500, 500);
f.add(finalPanel, BorderLayout.CENTER);
f.add(menu, BorderLayout.WEST);
f.pack();
}
public static void main(String args[]) {
createGUI();
t.ImageLoader();
labels[2][2].setIcon(new ImageIcon(West));
}
}
Thank you so much! I really appreciate any help or suggestions :D
A: As you said what you need to do is to override the paintComponent method of the JPanel and put a drawImage(...) in there. So:
@Override
public void paintComponent(Graphics g)
{
//super.paintComponent(g);
g.drawImage(image, 0, 0, null);
}
Where image is an instance of the class Image that you loaded previously in the initialization code (don't load it in the paintComponent, that would be too slow and you only want to load it once).
There are 2 ways to accomplish that:
*
*Make your own class extending JPanel and put that code there. You probably will want to create also a method setBackgroundImage(Image) that you can call from you main class to pass the image that you loaded from the disk.
*Make an anonymous class, that is doing something similar but without explicitely defining a new class. To do so instead of creating the panel like this:
JPanel gridPanel = new JPanel();
do it like this:
JPanel gridPanel = new JPanel()
{
@Override
public void paintComponent(Graphics g)
{
//super.paintComponent(g);
g.drawImage(image, 0, 0, null);
}
};
Of course you must do this in the actual code (not as an static initialization) since you want to make sure that you load the image before.
Finally a couple of suggestions:
*
*Variable names start in lower case by convention (as opposed to class names, which start in upper case). You don't do this, for example, in the JPanel Grid argument and the Commands field.
*You are violating Swing's single threading rule. That is, you must call invokeLater in your main wrapping your GUI initializing code. For example look at Swing's Hello World. You can find a detailed explanation of this here.
| |
doc_1488
|
failed to open dir: Too many open files error
I modified the PHP config (max_input_nesting_level = 1500) and set ulimit -n 30000. It didn't help.
[RuntimeException]
[UnexpectedValueException]
RecursiveDirectoryIterator::__construct(/home/realized/public_html/oro/vendor/oro/commerce/src/Oro/Bundle/PaymentBundle/
Resources/translations): failed to open dir: Too many open files
I also tried the solution on : stackoverflow solution. None seem to help me.
Thanks
A: What value does ulimit return?
On local environment I got:
$> ulimit
unlimited
And please reboot your operating system after the value of ulimit is changed.
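The same limit can also be read from inside a process with Python's standard resource module (a minimal sketch, assuming a Unix system):

```python
import resource

# RLIMIT_NOFILE: maximum number of file descriptors this process
# may have open at once, reported as a (soft, hard) pair.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# An unprivileged process may raise its own soft limit up to the
# hard limit, e.g.:
#   resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```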
| |
doc_1489
|
/blog/tag/foo
will return search results for foo.
In the template, I'd like to return a listing of all the posts that are tagged with 'foo', so I've made an MT:Entries block that starts:
<mt:Entries tag="<$mt:SearchString$>">
but it returns no results. However, placing <$mt:SearchString$> on the page outputs 'foo' just fine.
So I tried this:
<mt:Entries tag="foo">
and it correctly returns all entries tagged with foo. I'm not seeing a reason why the other one shouldn't work -- any ideas?
A: You cannot use a tag as a parameter value. You'll have to pass it via a variable, like so:
<mt:setvarblock name="q"><$mt:SearchString$></mt:setvarblock>
<mt:Entries tag="$q">
A: The reason why <mt:Entries tag="foo"> worked is because you are telling Movable Type to explicitly grab the entries tagged "foo". This is how you should do it in most templates, however the Search Results system template is different.
While the example Francois offers should work, it's not the intended method to get "tag search" results in the Search Results system template.
In the Search Results template, instead of the <mt:Entries> block tag use the <mt:SearchResults> block tag.
Your code should look something like this:
<mt:SearchResults>
<mt:IfTagSearch>
<!-- Template tags for "tag search" results -->
</mt:IfTagSearch>
<mt:IfStraightSearch>
<!-- Template tags for "text search" results -->
</mt:IfStraightSearch>
</mt:SearchResults>
For a more detailed example, take a look at the code in the default Search Results template in the "Classic Blog" template set (which ships with Movable Type) and modify the working (and tested) code.
| |
doc_1490
|
Querying a nullable @OneToOne relationship with JPA
How can I filter the null values of the non-SQL attribute of a @OneToOne relationship on the mappedBy side. The cited question dealt with the @JoinColumn side.
A: SELECT e2
FROM Entity1 e1 RIGHT OUTER JOIN e1.entity2 e2
WHERE e1 IS NULL
| |
doc_1491
|
This is a small part of the code below:
this.year = (isNaN(year) || year == null) ? calCurrent.getFullYear() : year;
A: What you are referring to is the ternary operator which is an inline conditional statement. To illustrate:
this.year = (isNaN(year) || year == null) ? calCurrent.getFullYear() : year;
is equivalent to
if(isNaN(year) || year == null){
this.year=calCurrent.getFullYear()
}
else{
this.year=year;
}
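For comparison, Python expresses the same fallback with its own inline conditional expression. This is a hedged sketch; the function name and default year are made up for illustration:

```python
import math

def resolve_year(year, current_year=2024):
    """Python's conditional expression, mirroring the JS ternary above:
    fall back to current_year when year is missing or NaN."""
    invalid = year is None or (isinstance(year, float) and math.isnan(year))
    return current_year if invalid else year

print(resolve_year(1999))          # → 1999
print(resolve_year(None))          # → 2024
print(resolve_year(float("nan")))  # → 2024
```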
| |
doc_1492
|
I have a table with date, time and IP. I want to pull the count of new, non-repeating distinct IPs for each hour, instead of the count of distinct IPs aggregated per hour.
My table looks like
Date Time IP
07-01-2018 08:00:00 1.1.1
07-02-2018 09:00:00 1.1.1
07-02-2018 10:00:00 2.2.2
07-03-2018 11:00:00 1.1.1
07-03-2018 12:00:00 2.2.2
07-03-2018 13:00:00 3.3.3
Then on
07-01-2018 08:00:00, distinct count of ip should be 1 (1.1.1)
07-02-2018 09:00:00, new distinct count of ip should be 0
07-02-2018 10:00:00, new distinct count of ip should be 1 (2.2.2)
07-03-2018 11:00:00, new distinct count of ip should be 0
07-03-2018 12:00:00, new distinct count of ip should be 0
07-03-2018 13:00:00, new distinct count of ip should be 1 (3.3.3)
The result I expect to be:
Date Hour Counts
07-01-2018 08:00:00 1
07-02-2018 09:00:00 0
07-02-2018 10:00:00 1
07-03-2018 11:00:00 0
07-03-2018 12:00:00 0
07-03-2018 13:00:00 1
Vlam's query works for unique counts by each day,
SELECT aa.date, count(distinct aa.ip) as count_ips FROM table1 aa
where not exists
(select bb.ip from table1 bb where bb.ip = aa.ip and bb.date < aa.date)
group by aa.date order by aa.date asc;
Now I would like to break down to each hour. Any advice?
Thanks in advance!
A: I have modified the original SQL and just added filtering to remove any IP addresses that have appeared before the date in the row, since only NEW IP addresses should be counted.
SELECT aa.date, count(distinct aa.ip) as count_ips FROM table1 aa
where not exists
(select bb.ip from table1 bb where bb.ip = aa.ip and bb.date < aa.date)
group by aa.date order by aa.date asc;
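If post-processing in application code is an option, the "count only first-ever appearances per hour" logic is a few lines of Python. A hedged sketch (the function is illustrative; the data is the sample from the question):

```python
def new_ip_counts(rows):
    """rows: (date, time, ip) tuples in chronological order.

    Returns (date, time, count) per distinct hour, where count is the
    number of IPs appearing for the first time ever in that hour.
    """
    seen = set()
    counts = {}  # (date, time) -> new-IP count, insertion-ordered
    for date, time, ip in rows:
        key = (date, time)
        counts.setdefault(key, 0)
        if ip not in seen:
            seen.add(ip)
            counts[key] += 1
    return [(d, t, c) for (d, t), c in counts.items()]

data = [
    ("07-01-2018", "08:00:00", "1.1.1"),
    ("07-02-2018", "09:00:00", "1.1.1"),
    ("07-02-2018", "10:00:00", "2.2.2"),
    ("07-03-2018", "11:00:00", "1.1.1"),
    ("07-03-2018", "12:00:00", "2.2.2"),
    ("07-03-2018", "13:00:00", "3.3.3"),
]
for row in new_ip_counts(data):
    print(row)
# counts come out 1, 0, 1, 0, 0, 1 — matching the expected result
```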
| |
doc_1493
|
A: I wrote a tutorial for this a while back. You can get it on my blog here; it's lengthy, so I won't repost it here.
A: The signpost solution is a lot of extra work.
There's a way to do it with Twitter4j that only takes a few lines of code. The solution comes from Stephan, here.
I have an example app using his approach, here.
| |
doc_1494
|
Not sure what I am doing wrong.
main urls.py
urlpatterns = [
path('', include('blogging_logs.urls', namespace='blogging_logs')),
path('users/', include('users.urls', namespace='users')),
path('admin/', admin.site.urls),
]
app: blogging_logs: base.html
<p>
<a href="{% url 'blogging_logs:index' %}">Home</a>
<a href="{% url 'blogging_logs:categories' %}">Categories</a>
{% if user.is_authenticated %}
Hello, {{ user.username }}.
{% else %}
<a href="{% url 'users:login' %}"> login in</a>
{% endif %}
</p>
{% block content %}{% endblock content %}
app: users: urls.py
from django.contrib import admin
from django.urls import re_path, path
from django.contrib.auth import authenticate, login
from django.contrib.auth import views as auth_views
app_name = 'users'
urlpatterns = [
# Login Page
path('login/', auth_views.LoginView.as_view(template_name='login.html')),
]
app:users: login.html
{% extends "blogging_logs/base.html" %}
{% block content %}
{% if form.errors %}
<p>Your username and password didn't match. Please try again.</p>
{% endif %}
<form class="" method="post" action="{% url 'users:login' %}" >
{% csrf_token %}
{{ form.as_p }}
<button name='submit'> Login in </button>
<input type="hidden" name="next" value="{% url 'blogging_logs:index' %}" />
</form>
{% endblock content %}
A: You are missing the name argument for the path function:
Change the following line
path('login/', auth_views.LoginView.as_view(template_name='login.html')),
to
path('login/', auth_views.LoginView.as_view(template_name='login.html'), name='login'),
| |
doc_1495
|
"Leonard, A., Fraternali, F., Daraio, C."
Now, there are three people in this string, and I would like to find the best way to obtain these three people, given that the string could also be sometimes:
"Leonard A., Fraternali F., Daraio C.",
i.e., without the commas. Before I had a function as follows:
def tokenize(str, token=','):
return [x for x in re.split(r'\s*%s\s*' % token,str) if x]
But of course this doesn't work in the first case.
Thanks!
A: Maybe this will do
In [10]: re.split(r'\.,', "Leonard A., Fraternali F., Daraio C.")
Out[10]: ['Leonard A', ' Fraternali F', ' Daraio C.']
In [11]: re.split(r'\.,', "Leonard, A., Fraternali, F., Daraio, C.")
Out[11]: ['Leonard, A', ' Fraternali, F', ' Daraio, C.']
A: Is this what you want?
def tokenize(line, token=','):
splitline = line.split(token)
names = []
for name in splitline:
name = name.strip()
if len(name.replace(".", "") ) == 1:
try:
names[-1] = '%s %s' % (names[-1], name)
continue
except IndexError:
pass
names.append(name)
return names
In: tokenize("Leonard A., Fraternali F., Daraio C.")
Out: ['Leonard A.', 'Fraternali F.', 'Daraio C.']
In: tokenize("Leonard, A., Fraternali, F., Daraio, C.")
Out: ['Leonard A.', 'Fraternali F.', 'Daraio C.']
A: OK, if your names all end up with a dot ., then this would work:
>>> names = "Leonard A., Fraternali F., Daraio C.".split('.')
>>> names
>>> ['Leonard A', ', Fraternali F', ', Daraio C', '']
>>> names = [name.strip(', ') for name in names if name]
>>> names
['Leonard A', 'Fraternali F', 'Daraio C']
| |
doc_1496
|
My application should assure (as much as possible) that those media files are not accessible, copied or manipulated.
Which strategies could I follow? Encrypt the files in the file system and decrypt them in memory before showing or playing them? Keep them in SQLite as BLOBs? Is this SQLite database accessible from other apps, or is it hidden from the rest? Any other ideas? I haven't found much info about this "issue" on the web.
Thanks in advance,
Chemi.
A: I suggest saving these files to the SD card, not to the private files of your Activity, as images/audio files are usually quite big (I have seen in this discussion that you are planning to handle 400 MB; is this the same app?). So encryption should be fine, and more straightforward than SQLite.
The class below allows encrypting bytes to binary files:
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
public class AESEncrypter {
public static void encryptToBinaryFile(String password, byte[] bytes, File file) throws EncrypterException {
try {
final byte[] rawKey = getRawKey(password.getBytes());
final FileOutputStream ostream = new FileOutputStream(file, false);
ostream.write(encrypt(rawKey, bytes));
ostream.flush();
ostream.close();
} catch (IOException e) {
throw new EncrypterException(e);
}
}
public static byte[] decryptFromBinaryFile(String password, File file) throws EncrypterException {
try {
final byte[] rawKey = getRawKey(password.getBytes());
final FileInputStream istream = new FileInputStream(file);
final byte[] buffer = new byte[(int)file.length()];
istream.read(buffer);
return decrypt(rawKey, buffer);
} catch (IOException e) {
throw new EncrypterException(e);
}
}
private static byte[] getRawKey(byte[] seed) throws EncrypterException {
try {
final KeyGenerator kgen = KeyGenerator.getInstance("AES");
final SecureRandom sr = SecureRandom.getInstance("SHA1PRNG");
sr.setSeed(seed);
kgen.init(128, sr); // 192 and 256 bits may not be available
final SecretKey skey = kgen.generateKey();
return skey.getEncoded();
} catch (Exception e) {
throw new EncrypterException(e);
}
}
private static byte[] encrypt(byte[] raw, byte[] clear) throws EncrypterException {
try {
final SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES");
final Cipher cipher = Cipher.getInstance("AES");
cipher.init(Cipher.ENCRYPT_MODE, skeySpec);
return cipher.doFinal(clear);
} catch (Exception e) {
throw new EncrypterException(e);
}
}
private static byte[] decrypt(byte[] raw, byte[] encrypted) throws EncrypterException {
SecretKeySpec skeySpec = new SecretKeySpec(raw, "AES");
try {
final Cipher cipher = Cipher.getInstance("AES");
cipher.init(Cipher.DECRYPT_MODE, skeySpec);
return cipher.doFinal(encrypted);
} catch (Exception e) {
throw new EncrypterException(e);
}
}
}
You will also need this Exception class:
public class EncrypterException extends Exception {
public EncrypterException ( ) { super( ); }
public EncrypterException (String str ) { super(str); }
public EncrypterException (Throwable e) { super(e); }
}
Then, you just have to use what follows to generate encrypted files:
encryptToBinaryFile("password", bytesToSaveEncrypted, encryptedFileToSaveTo);
And in your app, you can read them the following way:
byte [] clearData = decryptFromBinaryFiles("password", encryptedFileToReadFrom);
A hardcoded password can be recovered by digging into the obfuscated code and looking for strings. I don't know whether this would be a sufficient security level in your case?
If not, you can store the password in your Activity's private preferences or using tricks such as this.class.getDeclaredMethods()[n].getName() as a password. This is more difficult to find.
About performances, you have to know that crypting / decrypting can take quite a long time. This requires some testing.
[EDIT: 04-25-2014] There was a big mistake in my answer. This implementation is seeding SecureRandom, which is bad ('evil', some would say).
There is an easy way to circumvent this issue. It is explained in details here in the Android Developers blog. Sorry about that.
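The point of that edit, in short: derive the AES key from the password with a real key-derivation function instead of seeding a PRNG. A minimal sketch of that idea in Python (the salt handling and iteration count are illustrative, not from the Java code above):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 16-byte AES-128 key from a password via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, iterations, dklen=16
    )

salt = os.urandom(16)  # random per file; stored next to the ciphertext
key = derive_key("password", salt)
print(len(key))  # → 16
```

The same derivation is deterministic for a given (password, salt) pair, which is what makes decryption possible later.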
A: I resolved a similar problem using SQLite BLOBs. You can try the application on the Android Market.
Actually I'm saving the media files as a series of BLOBs. SQLite supports BLOB sizes up to 2 GB, but due to Android limitations I was forced to chunk the files into 4 MB BLOBs.
The rest is easy.
| |
doc_1497
|
final resp = await http.get(Uri.parse(url));
File file = File(targetPath);
await file.create(recursive: true);
await file.writeAsBytes(resp.bodyBytes);
Because the videos to be downloaded are about 90 minutes long, they are large, which makes this solution (buffering the whole response in memory) inappropriate.
My goal is to have a stream of incoming bytes piped into a buffer, writing every 4 kB to my file.
I tried using a StreamedResponse, like this:
Future<void> downloadResource(String url, String targetPath) async {
File file = File(targetPath);
final req = http.Request("GET", Uri.parse(url));
req.send().asStream().listen(onData);
}
void onData(http.StreamedResponse event) async {
final newBytes = event.stream.toBytes(); //Type: Future<Uint8List>
//process incoming bytes
}
I don't know how to get at the incoming bytes, i.e. the raw data as it comes in. I first thought StreamedResponse would give me access to this data, but it is basically the same as before, with only Future-based operations.
So if you have a solution or any hint on how to achieve my goal, I would be very thankful if you shared it!
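For what it's worth, the "read a fixed-size chunk, write it, repeat" pattern being asked for is language-agnostic. A minimal Python sketch of the idea (the stream objects are illustrative stand-ins for the HTTP byte stream and the target file):

```python
import io

def copy_in_chunks(src, dst, chunk_size=4096):
    """Read src in chunk_size pieces, writing each piece to dst.

    Only one chunk is ever held in memory, so the total size of the
    download does not matter.
    """
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total

# Stand-ins for the HTTP byte stream and the target file:
src = io.BytesIO(b"x" * 10_000)
dst = io.BytesIO()
print(copy_in_chunks(src, dst))  # → 10000
```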
| |
doc_1498
|
arr1=np.zeros([5,8])
arr2=np.ones([4,10])
I would like to put arr2 into arr1 either by cutting off excess lengths in some dimensions, or filling missing length with zeros.
I have tried:
arr1[exec(str(",:"*len([arr1.shape]))[1:])]=arr2[exec(str(",:"*len([arr2.shape]))[1:])]
which is basically the same as
arr1[:,:]=arr2[:,:]
I would like to do this preferably in one line and without "for" loops.
A: You could use this :
arr1[:min(arr1.shape[0], arr2.shape[0]), :min(arr1.shape[1], arr2.shape[1])]=arr2[:min(arr1.shape[0], arr2.shape[0]), :min(arr1.shape[1], arr2.shape[1])]
without any for loop.
It's the same concept you applied in second try, but with a condition to choose minimum length.
A: I solved this by coming up with the following. I used slice() as @hpaulj suggested. Considering I want to assign ph10 (an array) to ph14 (an array of zeros of size bound1):
ph14=np.zeros(bound1)
ph10=np.array(list1)
ind_min=np.min([ph14.shape,ph10.shape],0)
ph24=[]
for n2 in range(0,len(ind_min)):
    ph24=ph24+[slice(0,ind_min[n2])]
ph14[tuple(ph24)]=ph10[tuple(ph24)]
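For reference, the clipping idea is just "take the element-wise minimum of the two shapes and copy that block". A pure-Python sketch over 2-D lists (the explicit loops only make the semantics clear; the NumPy version replaces them with a tuple of slices):

```python
def clipped_copy(dst, src):
    """Copy the overlapping top-left block of src into dst (2-D lists)."""
    rows = min(len(dst), len(src))
    cols = min(len(dst[0]), len(src[0])) if rows else 0
    for i in range(rows):
        for j in range(cols):
            dst[i][j] = src[i][j]
    return dst

dst = [[0] * 4 for _ in range(3)]   # like np.zeros([3, 4])
src = [[1] * 2 for _ in range(5)]   # like np.ones([5, 2])
print(clipped_copy(dst, src))
# → [[1, 1, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0]]
```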
| |
doc_1499
|
java.lang.IllegalStateException: The specified child already has a parent. You must call removeView() on the child's parent first
Here is the code for creating list list of fragments to pass to the pageAdapter.
private Vector<Fragment> getFragments() {
Vector<Fragment> list = new Vector<Fragment>();
list.add(new Fragment1());
list.add(new Fragment2());
list.add(new Fragment3());
return list;
}
Each of the fragments was essentially the same, except for being created with a different layout. Here is the original code I had for one of the fragments.
public class Fragment1 extends Fragment {
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View v = getActivity().getLayoutInflater().inflate(R.layout.fragment1, container);
return v;
}
}
But when I ran it like this it kept crashing with the IllegalStateException. I found that the issue was coming from the fragments being created. After some Googling I tried changing the code for the fragment to this.
public class Fragment1 extends Fragment {
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View v = getActivity().getLayoutInflater().inflate(R.layout.fragment1, container, false);
return v;
}
}
This resolved the issue and I no longer got the IllegalStateException, except I have no idea how this worked. What exactly does this boolean do? And what does this exception mean? I had tried adding the removeView() call like it suggested, but that did not resolve it. Also, I tried changing this boolean to true and got the same error again. The Android docs say it's attachToRoot, but isn't that what I want to do? Attach my 3 fragments to the root view, which is the ViewPager? If anyone could explain this it would be greatly appreciated.
A: The boolean parameter of the 3-arg version of LayoutInflater.inflate() determines whether the LayoutInflater will add the inflated view to the specified container. For fragments, you should specify false because the Fragment itself will add the returned View to the container. If you pass true or use the 2-arg method, the LayoutInflater will add the view to the container, then the Fragment will try to do so again later, which results in the IllegalStateException.
|