Expanding a bit on Mark's answer...
Yes, you can use the manipulation and inertia APIs to accomplish this; see this overview page.
A while back I created my own very basic ScatterView-like control that essentially did what ScatterView does, but with the following limitations:
- Only one child, so it works more like a Border
- No default visual appearance or special behavior for the child item, unlike ScatterView
One problem that hit me while developing this is that you have to make your container control occupy a larger area than the actual child. Otherwise, you will not be able to reliably capture the contact events when the fingers are outside your manipulated object. What I did was make my container fullscreen (1024x768) and transparent, and that works fine for my case.
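As a rough illustration (this is just a sketch with a made-up class name, not my actual control), the key point is that the container covers the whole screen and has a Background that is transparent but still hit-testable, so contacts that land outside the child are still delivered to the container:
// Minimal sketch of the container idea (hypothetical class name).
// Deriving from Border gives a single Child plus a Background that
// participates in hit testing even when it is fully transparent.
public class FullscreenManipulationHost : Border
{
    public FullscreenManipulationHost()
    {
        Background = Brushes.Transparent; // transparent but hit-testable; a null Background would not be
        Width = 1024;  // full Surface screen size
        Height = 768;
    }
}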
For the manipulation itself, you use an instance of the Affine2DManipulationProcessor class. You tell it to start the manipulation and it will then continuously feed you delta events (Affine2DManipulationDelta).
If you want your manipulation to have a more natural feel after the user releases their fingers, use the Affine2DInertiaProcessor class, which works in a similar way to the manipulation processor. The basic setup is that as soon as the manipulation processor is done (the user released their fingers), you tell the inertia processor to start.
Let's look at some code :) Here's my setup method in my container class:
private void InitializeManipulationInertiaProcessors()
{
    manipulationProcessor = new Affine2DManipulationProcessor(
        Affine2DManipulations.TranslateY | Affine2DManipulations.TranslateX |
        Affine2DManipulations.Rotate | Affine2DManipulations.Scale,
        this);
    manipulationProcessor.Affine2DManipulationCompleted += new EventHandler<Affine2DOperationCompletedEventArgs>(processor_Affine2DManipulationCompleted);
    manipulationProcessor.Affine2DManipulationDelta += new EventHandler<Affine2DOperationDeltaEventArgs>(processor_Affine2DManipulationDelta);

    inertiaProcessor = new Affine2DInertiaProcessor();
    inertiaProcessor.Affine2DInertiaDelta += new EventHandler<Affine2DOperationDeltaEventArgs>(processor_Affine2DManipulationDelta);
}
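(The two processors are just private fields on the container. Their declarations aren't shown above, so for completeness they would look something like this:)
private Affine2DManipulationProcessor manipulationProcessor;
private Affine2DInertiaProcessor inertiaProcessor;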
To start it all, I trap ContactDown in my container class:
protected override void OnContactDown(ContactEventArgs e)
{
    base.OnContactDown(e);
    e.Contact.Capture(this);

    // Tell the manipulation processor what contact to track and it will
    // start sending manipulation delta events based on user motion.
    manipulationProcessor.BeginTrack(e.Contact);
    e.Handled = true;
}
That's all; now sit back and let the manipulation processor do its work. Whenever it has new data it raises the delta event (several times per second while the user moves their fingers). Note that it is up to you, as the consumer of the processor, to do something with the values. It will only tell you things like "the user applied a rotation of X degrees" or "the user moved their finger X,Y pixels". What you typically do is forward these values to RenderTransforms to actually show the user what has happened.
In my case, my child object has three hard-coded RenderTransforms, "translate", "rotate" and "scale" (see the setup sketch after the handler), which I update with the values from the processor. I also do some boundary checking to make sure the object isn't moved outside the surface or scaled too large or too small:
/// <summary>
/// This is called whenever the manipulation processor or the inertia processor has calculated a new position
/// </summary>
/// <param name="sender">The processor that caused the change</param>
/// <param name="e">Event arguments containing the calculations</param>
void processor_Affine2DManipulationDelta(object sender, Affine2DOperationDeltaEventArgs e)
{
    var x = translate.X + e.Delta.X;
    var y = translate.Y + e.Delta.Y;

    if (sender is Affine2DManipulationProcessor)
    {
        // Make sure we don't move outside the screen.
        // The inertia processor does this automatically, so only adjust
        // if the sender is the manipulation processor.
        y = Math.Max(0, Math.Min(y, this.ActualHeight - box.RenderSize.Height));
        x = Math.Max(0, Math.Min(x, this.ActualWidth - box.RenderSize.Width));
    }

    translate.X = x;
    translate.Y = y;

    rotate.Angle += e.RotationDelta;

    var newScale = scale.ScaleX * e.ScaleDelta;
    Console.WriteLine("Scale delta: " + e.ScaleDelta + " Rotate delta: " + e.RotationDelta);
    newScale = Math.Max(0.3, Math.Min(newScale, 3));
    scale.ScaleY = scale.ScaleX = newScale;
}
One thing to notice here is that both the manipulation processor and the inertia processor use the same callback for delta events.
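For completeness, here is roughly how those three transforms could be wired up on the child. This is an assumed sketch; the names translate, rotate, scale and box match the handler above, but the setup itself is not part of my original code:
// Fields used by the delta handler above.
private TranslateTransform translate;
private RotateTransform rotate;
private ScaleTransform scale;

private void InitializeTransforms()
{
    translate = new TranslateTransform();
    rotate = new RotateTransform();
    scale = new ScaleTransform();

    // Scale and rotate around the center of the child, then translate in parent coordinates,
    // which matches how the delta handler applies e.Delta to translate.X/Y.
    box.RenderTransformOrigin = new Point(0.5, 0.5);
    box.RenderTransform = new TransformGroup
    {
        Children = new TransformCollection { scale, rotate, translate }
    };
}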
The final piece of the puzzle is when the user releases their fingers and I want to start the inertia processor:
/// <summary>
/// This is called when the manipulation has completed (i.e. the user released the contacts)
/// and we let inertia take over the movement to produce a natural slow-down
/// </summary>
void processor_Affine2DManipulationCompleted(object sender, Affine2DOperationCompletedEventArgs e)
{
    inertiaProcessor.InitialOrigin = e.ManipulationOrigin;

    // Set the deceleration rates. Smaller numbers mean less friction (i.e. a longer time before it stops).
    inertiaProcessor.DesiredAngularDeceleration = .0010;
    inertiaProcessor.DesiredDeceleration = .0010;
    inertiaProcessor.DesiredExpansionDeceleration = .0010;
    inertiaProcessor.Bounds = new Thickness(0, 0, this.ActualWidth, this.ActualHeight);
    inertiaProcessor.ElasticMargin = new Thickness(20);

    // Set the initial values from the completed manipulation.
    inertiaProcessor.InitialVelocity = e.Velocity;
    inertiaProcessor.InitialExpansionVelocity = e.ExpansionVelocity;
    inertiaProcessor.InitialAngularVelocity = e.AngularVelocity;

    // Start the inertia.
    inertiaProcessor.Begin();
}