Optimizing C# for XAML Platforms

Georgi Atanasov and Tsvyatko Konov

When working with the C# development language, individual developers often find a process that works for them and stick to it. After all, if the code passes all our tests, we can ship it, right? Well, no. Releasing products in today’s complex programming landscape isn’t that simple. We think developers need to revisit some of the standard decisions we make about Extensible Application Markup Language (XAML) concepts such as dependency properties, LINQ and the layout system. When we examine these aspects from a performance perspective, various approaches can prove questionable.

By exploring dependency properties, LINQ performance and the layout system through some code examples, we can see exactly how they work and how we can get the best performance out of our applications by rethinking some common assumptions.

The Problem with Dependency Property Look-Up Time

DependencyProperty and DependencyObject are the fundamentals on which Windows Presentation Foundation (WPF), Silverlight and XAML are built. These building blocks provide access to critical features such as styling, binding, declarative UI and animation. In a typical program, we use them all the time. Every single bit of such high-end functionality comes at a price measured in performance, however, be it loading time, rendering speed or the application’s memory footprint. To support framework functionality that includes default values, styles, bindings, animations or even value coercion in WPF, the property system backing them up needs to be more complex than standard CLR properties.

The following steps occur during a DependencyProperty effective value look-up:

  • The structure holding the data for the specified property is retrieved from the property store.
  • Once the structure is retrieved, its effective value is evaluated — is it a default value, a style, a binding or an animated value?
  • The final effective value is returned.

Figure 1 shows some measurements (in milliseconds) of CLR properties and DependencyProperty usage.

100,000 Iterations CLR Properties DependencyProperty
Set different values 3 ms 1062 ms
Set same value 3 ms 986 ms
Get value 3 ms 154 ms

Figure 1 DependencyProperty Get/Set Measurements

Note All measurements were performed on the Silverlight for Windows Phone platform, on a Samsung Omnia 7 device. We used a mobile device because of its lower hardware resources, where differences are more distinct.

Figure 2 shows the class used to perform the tests.

Figure 2 Simple “Control” Inheritor

public class TestControl : Control
{
    public static readonly DependencyProperty TestIntProperty =
        DependencyProperty.Register("TestInt", typeof(int), typeof(TestControl), new PropertyMetadata(0));

    public int TestInt
    {
        get
        {
            return (int)this.GetValue(TestIntProperty);
        }
        set
        {
            this.SetValue(TestIntProperty, value);
        }
    }
}

This test is the simplest one possible, with no styles, bindings or animations applied. If you try the same scenario on a ListBox, you'll see even bigger numbers. It demonstrates that DependencyProperty access is far heavier than CLR property access, and that careless usage patterns can add enormous overhead. In applications with extensive looping, the performance hit is even more apparent.
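The numbers in Figure 1 come from simple timing loops. The sketch below is our own reconstruction of such a harness (the article's actual test code isn't listed), and it covers only the CLR-property column, since timing the DependencyProperty column requires a XAML runtime; the PlainObject class is a hypothetical stand-in:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

public class PlainObject
{
    // A simple CLR auto-property, the baseline column in Figure 1.
    public int TestInt { get; set; }
}

public class Program
{
    // Times 100,000 sets of a plain CLR property and returns the elapsed ms.
    public static long MeasureSetDifferentValues(PlainObject target)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < 100000; i++)
        {
            target.TestInt = i; // the "set different values" scenario
        }
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    public static void Main()
    {
        PlainObject obj = new PlainObject();

        // Warm up so JIT compilation doesn't skew the measurement.
        MeasureSetDifferentValues(obj);

        Console.WriteLine("Set different values: " + MeasureSetDifferentValues(obj) + " ms");
        // To produce the DependencyProperty column, run the same loop on the
        // TestControl from Figure 2, whose setter calls SetValue.
    }
}
```

On a device, the DependencyProperty variant of this loop is what produces the millisecond figures shown above.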

Solving the Dependency Property Look-Up Time Problem

The challenge is to keep all the value provided by the dependency property system and to improve the look-up performance at the same time. Two important yet simple optimizations can help improve the overall application performance.

Cache a DependencyProperty effective value in a member variable for later use

Figure 3 shows an extended version of the TestControl class.

Figure 3 Simple “Control” Inheritor with a Cached Property Value

public class TestControl : Control
{
    public static readonly DependencyProperty TestIntProperty =
        DependencyProperty.Register("TestInt", typeof(int), typeof(TestControl), new PropertyMetadata(0, OnTestIntChanged));

    private int testIntCache;

    public int TestInt
    {
        get
        {
            return this.testIntCache;
        }
        set
        {
            if (this.testIntCache != value)
            {
                this.SetValue(TestIntProperty, value);
            }
        }
    }

    private static void OnTestIntChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        (d as TestControl).testIntCache = (int)e.NewValue;
    }
}

We add a handler that listens for changes to the TestInt property and caches the new value in a field; the getter then simply returns this cached field. The setter also checks whether the new value differs from the cached one, and if it doesn't, SetValue isn't called at all. The results of repeating the measurements with these simple changes are shown in Figure 4.

100,000 Iterations CLR Properties DependencyProperty
Set different values 3 ms 1062 ms
Set same value 3 ms 4 ms
Get value 3 ms 3 ms

Figure 4 DependencyProperty Get/Set Measurements Using the Class from Figure 3

Only five lines of additional code resulted in a significant optimization. This optimization comes at a price, however. You sacrifice memory footprint (adding four more bytes to the object’s size with this additional field) for the sake of performance. The user probably won’t notice a slightly larger memory footprint but would definitely be aware of slower performance. The developer is responsible for evaluating the impact of either approach. If you have many dependency properties within your classes and you create many instances of these classes, more bytes within a single object can become a problem. If you use DependencyProperty sparingly, you don’t need to cache its effective value in a field.

Note Be careful when adding a Changed handler for a property. It will force the underlying framework to synchronize the property value with the UI thread, which can downgrade performance for properties whose values are animated. Also keep in mind that the “if” check in the property setter won’t work with bindings because the framework internally uses the SetValue(TestIntProperty, value) method rather than the property setter.

Cache a DependencyProperty effective value outside a loop

The example in Figure 3 works because we have our own class that we can modify as desired. But what if we have to use a DependencyProperty from an external library and we don’t have access to its source? We can handle this with another simple yet efficient optimization. Consider the following code:

for (int i = 0; i < 100000; i++)
{
    if (this.ActualWidth == i) // "this" refers to a PhoneApplicationPage instance
    {
        // perform some action
    }
    else
    {
        // perform other action
    }
}

Do you see something that can be written more efficiently? Here is a slightly modified version of the preceding loop:

double actualWidth = this.ActualWidth; // "this" refers to a PhoneApplicationPage instance
for (int i = 0; i < 100000; i++)
{
    if (actualWidth == i)
    {
        // perform some action
    }
    else
    {
        // perform other action
    }
}

With this modified approach, we look up the value of the property only once and then use the value, cached in a local variable, to perform the “if” clause. The optimized results are shown in Figure 5.

100,000 Iterations Loop Time Elapsed
Before optimization 750 ms
After optimization 4 ms

Figure 5 Comparison of Loop Performance

Pretty impressive! This optimization is valid only if the ActualWidth value isn’t changed during the loop. If there’s a condition that could change this value, you need to update the variable upon the change or look it up every time if you don’t know when the change will occur.

Some final thoughts about DependencyProperty caching

Dependency properties are great, and they help make XAML the powerful framework it is. But don't forget that in some cases they can degrade performance significantly. Always keep in mind the overhead the dependency property system adds to setting and getting a value, and use the preceding tricks when appropriate. Estimate which properties need to be dependency properties and which can be simple CLR properties. For example, a getter-only property doesn't need a registered DependencyProperty; a plain CLR property that raises the PropertyChanged notification is enough to enable binding.
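Such a binding-friendly CLR property can be a one-way binding source through INotifyPropertyChanged. Here's a minimal sketch; the class and property names (DownloadViewModel, Progress) are our own invention for illustration:

```csharp
using System;
using System.ComponentModel;

public class DownloadViewModel : INotifyPropertyChanged
{
    private double progress;

    public event PropertyChangedEventHandler PropertyChanged;

    // A plain CLR property: no DependencyProperty registration needed.
    // Raising PropertyChanged is enough for one-way binding to pick up changes.
    public double Progress
    {
        get { return this.progress; }
        private set
        {
            if (this.progress != value)
            {
                this.progress = value;
                this.OnPropertyChanged("Progress");
            }
        }
    }

    public void ReportProgress(double value)
    {
        this.Progress = value;
    }

    private void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = this.PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```

Bindings such as {Binding Progress} will refresh whenever the notification is raised, without any dependency property look-up cost on the getter.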

Efficient Looping With — and Without — LINQ

Since LINQ was first released in 2007 as part of .NET Framework 3.5, developers seldom write their own loops anymore, relying on LINQ instead. LINQ is powerful, and its beauty is that it can be executed against different providers, such as Microsoft SQL Server, in-memory objects, XML or even your own custom provider implementing the IQueryable interface. We love LINQ—its framework libraries often spare us from writing many lines of code. Sometimes, however, we still have to write our own loops for the sake of performance. By properly estimating our algorithm complexity, we can recognize whether we can use LINQ or need our own loops.
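To illustrate that provider flexibility, the same query shape can run against an in-memory list and against XML via LINQ to XML. This is a sketch with made-up data:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        // LINQ to Objects: query an in-memory list.
        List<int> scores = new List<int> { 10, 75, 42, 90 };
        List<int> highFromList = scores.Where(s => s > 50).OrderBy(s => s).ToList();

        // LINQ to XML: the same Where/OrderBy shape against an XML document.
        XDocument doc = XDocument.Parse(
            "<scores><score>10</score><score>75</score><score>42</score><score>90</score></scores>");
        List<int> highFromXml = doc.Root.Elements("score")
                                   .Select(e => (int)e)
                                   .Where(s => s > 50)
                                   .OrderBy(s => s)
                                   .ToList();

        Console.WriteLine(string.Join(",", highFromList)); // 75,90
        Console.WriteLine(string.Join(",", highFromXml));  // 75,90
    }
}
```

The operators are identical; only the provider behind the sequence changes.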

For example, we can write code to solve a simple problem. Let’s say we need to find the minimum, maximum and average of an array of integers. As shown in Figure 6, using LINQ makes it simple and clean.

Figure 6 Finding Min/Max/Average Using LINQ

private double[] FindMinMaxAverage(List<int> items)
{
    return new double[] { items.Min(), items.Max(), items.Average() };
}

Using our own loop, the code would look like Figure 7. (Yes, there is much more code.)

Figure 7 Finding Min/Max/Average of a Sequence Using a Custom Loop

private double[] FindMinMaxAverage(List<int> items)
{
    if (items.Count == 0)
    {
        return new double[] { 0d, 0d, 0d };
        // we may throw an exception if appropriate
        // throw new ArgumentException("items array is empty");
    }

    double min = items[0];
    double max = min;
    double sum = min;
    for (int i = 1; i < items.Count; i++)
    {
        if (items[i] < min)
        {
            min = items[i];
        }
        else if (items[i] > max)
        {
            max = items[i];
        }
        sum += items[i];
    }

    return new double[] { min, max, sum / items.Count };
}

When you compare the two routines, you can see the time advantage of our loop over LINQ, as shown in Figure 8.

Method Time Elapsed
LINQ 60 ms
Loop 20 ms

Figure 8 Execution Time of Methods in Figure 6 and Figure 7

Are you surprised by the results? The LINQ implementation is highly efficient, and we couldn't write a better-performing loop to find the minimum of a sequence. But finding the minimum, maximum and average requires the LINQ approach to walk the entire sequence three times, once per aggregate. Our loop is three times faster because it makes a single pass: both versions are linear, O(n), but the LINQ version performs roughly 3n operations to our n.
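That triple pass is easy to verify with a small diagnostic iterator of our own (a sketch, not the article's test code), which counts how many times a sequence is walked:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Program
{
    public static int Enumerations;

    // Yields the items of a list while counting how many times the
    // sequence is enumerated from the beginning.
    public static IEnumerable<int> Counted(List<int> items)
    {
        Enumerations++;
        foreach (int item in items)
        {
            yield return item;
        }
    }

    public static void Main()
    {
        List<int> items = new List<int> { 5, 1, 9, 3 };
        IEnumerable<int> sequence = Counted(items);

        // The LINQ version of Figure 6: three aggregates, three full passes.
        double min = sequence.Min();
        double max = sequence.Max();
        double average = sequence.Average();

        Console.WriteLine(Enumerations); // 3
    }
}
```

Running the manual loop from Figure 7 against the same counter would report a single pass.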

To LINQ or Not To LINQ

Figure 6 is a simple example that demonstrates the importance of complexity estimation in writing efficient code. We should always evaluate different solutions, analyze their pros and cons and decide which one to use in a particular context.

For example, if this method is used once or twice in the application, we obviously don't need to write more code. But if it's used extensively, say in a charting engine, we should definitely opt for the second solution, because each call completes in roughly a third of the time.

A LINQ extension method generally has O(n) or O(n log n) complexity, depending on the method. Some methods also carry optimizations. The Count method, for example, checks whether the sequence implements ICollection; if so, it returns the collection's Count property directly, making the complexity constant, O(1). And because LINQ queries use deferred execution, adjacent operators can be merged into a single pass, reducing the overall work.
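The Count shortcut can be observed directly. The collection below (a contrived diagnostic type of our own) throws if anyone actually enumerates it, yet Enumerable.Count() returns instantly because the type implements ICollection&lt;T&gt;:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// Implements ICollection<int> but forbids enumeration, proving that
// Enumerable.Count() reads the Count property instead of iterating.
public class NoEnumerationCollection : ICollection<int>
{
    public int Count { get { return 42; } } // contrived fixed count
    public bool IsReadOnly { get { return true; } }

    public IEnumerator<int> GetEnumerator()
    {
        throw new InvalidOperationException("Enumerated!");
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return this.GetEnumerator();
    }

    public void Add(int item) { throw new NotSupportedException(); }
    public void Clear() { throw new NotSupportedException(); }
    public bool Contains(int item) { return false; }
    public void CopyTo(int[] array, int arrayIndex) { }
    public bool Remove(int item) { throw new NotSupportedException(); }
}

class Program
{
    static void Main()
    {
        var collection = new NoEnumerationCollection();
        Console.WriteLine(collection.Count()); // 42: O(1), no enumeration happens
        // By contrast, collection.Select(x => x).Count() would be forced to
        // enumerate the source and would throw.
    }
}
```

As soon as an operator hides the ICollection interface (Select, Where and so on), Count falls back to a full O(n) enumeration.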

OrderBy uses the same custom quicksort implementation as the List&lt;T&gt;.Sort method. We tried our own GroupBy implementation and gained only several milliseconds over LINQ. In that case, why bother writing our own grouping algorithm when LINQ already implements one efficiently? Always measure the execution time of the methods you write, estimate how often they will be used and decide whether you can write a better implementation.

A more complicated example with some XAML code illustrates where using LINQ adds enormous complexity. Consider the code in Figure 9.

Figure 9 Sample Code with Extensive LINQ Usage

private void UpdateRelatedEntries(List<LinqViewModel> items)
{
    foreach (LinqViewModel item in items)
    {
        item.Related = items.Where(s => s != item)
                            .OrderBy(s => s.Score).Reverse()
                            .Take(5)
                            .ToList();
    }
}

Can you guess the complexity of this “simple” code? Let’s analyze it:

foreach -> O(n)

  • items.Where(s => s != item) -> O(n – 1)
  • OrderBy(s => s.Score) -> O((n – 1)log(n – 1)) (quicksort)
  • Reverse() -> constant; will be merged with Take
  • Take(5) -> constant
  • ToList() -> constant, since Array.Copy is used internally and only the time for memory allocation is spent

The overall complexity is O(n × ((n – 1) + (n – 1) × log(n – 1))), which simplifies to O(n² log n). That's worse than quadratic, which is unacceptable and can be a showstopper in your application.

Now imagine adding a Select clause with some additional LINQ methods within its body; the complexity would be nearly cubic. Measuring the execution time of this method with our 100,000 entries results in a mind-blowing five-minute (and still running) freeze on our Omnia 7 screen. Some of you might be thinking that it isn't realistic to experiment with 100,000 entries on a mobile device. And you're right, that's a lot of data even for a desktop app. But the point remains: with superlinear complexity like this, the running time grows far faster than the data.

Figure 10 shows the results from tests using 100, 1000 and 5000 items — numbers more realistic for a mobile app.

Items Time Elapsed
100 45 ms
1000 4222 ms
5000 138829 ms == ~2.3 minutes

Figure 10 Execution Times for the Method in Figure 9

Optimizing the loop

The algorithm definitely needs to be improved. For starters, why do we need an expensive sort operation on each iteration? We can sort the items in descending order once, outside the loop. As shown in Figure 11, this immediately removes the inner OrderBy and Reverse calls.

Figure 11 Optimization of the Method in Figure 9

private void UpdateRelatedEntries(List<LinqViewModel> items)
{
    List<LinqViewModel> sortedDescendantItems = items.OrderBy(item => item.Score).Reverse().ToList();
    foreach (LinqViewModel item in items)
    {
        item.Related = sortedDescendantItems.Where(s => s != item)
                            // .OrderBy(s => s.Score).Reverse()
                            .Take(5)
                            .ToList();
    }
}

You might be wondering why we copy the result of the query into a list. The reason is a bit tricky. Because LINQ uses deferred execution, the quicksort behind the OrderBy call runs only when iteration starts. If the result weren't copied to a list (which iterates the query exactly once), the quicksort would run again on every enumeration, even though the query is composed outside the loop. Another consequence of deferred execution is that the Take(5) and Where(…) operators are merged, so each pass through the loop body examines only a handful of elements, which is effectively constant work.
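Deferred execution is easy to see with a counter in the key selector (our own illustration, not the article's code): enumerating the same OrderBy query twice sorts twice, while snapshotting it with ToList sorts once.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        List<int> items = new List<int> { 3, 1, 2 };
        int keySelectorCalls = 0;

        // The query is only a description; nothing runs yet.
        IEnumerable<int> query = items.OrderBy(i => { keySelectorCalls++; return i; });

        // Each enumeration re-executes the sort.
        foreach (int i in query) { }
        foreach (int i in query) { }
        Console.WriteLine(keySelectorCalls); // 6 -- the key selector ran once per item, per pass

        keySelectorCalls = 0;
        List<int> snapshot = query.ToList(); // sort once, copy the result
        foreach (int i in snapshot) { }
        foreach (int i in snapshot) { }
        Console.WriteLine(keySelectorCalls); // 3 -- only the ToList pass sorted
    }
}
```

This is exactly why Figure 11 materializes sortedDescendantItems before the loop.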

Figure 12 shows what happens when we do the measurements with this optimization.

Items Time Elapsed
100 13 ms
1000 29 ms
5000 131 ms

Figure 12 Execution Times of the Method from Figure 11

That's much better: from O(n² log n) we're down to roughly O(n log n), which is considered good in computer science. Still, if this were our code, we wouldn't use LINQ in the loop's body. Instead, we would write our own inner loop, add the first five items and then break, as shown in Figure 13.

Figure 13 The Method from Figure 9 Implemented Using our Own Loops

private void UpdateRelatedEntries(List<LinqViewModel> items)
{
    List<LinqViewModel> sortedDescendantItems = items.OrderBy(item => item.Score).Reverse().ToList();
    for (int i = 0; i < items.Count; i++)
    {
        LinqViewModel item = items[i];
        List<LinqViewModel> relatedItems = new List<LinqViewModel>(8);
        for (int j = 0; j < sortedDescendantItems.Count; j++)
        {
            if (sortedDescendantItems[j] == item)
            {
                continue;
            }
            relatedItems.Add(sortedDescendantItems[j]);
            if (relatedItems.Count == 5)
            {
                break;
            }
        }
        item.Related = relatedItems;
    }
}

Time for measurements again. The results are shown in Figure 14.

Items Time Elapsed
100 6 ms
1000 13 ms
5000 57 ms

Figure 14 Execution Times of the Method from Figure 13

Ah, the good old-fashioned loop — nothing is faster. If you prefer LINQ and plan to use the method from Figure 9 only rarely, you’ll be fine. If you plan to use it extensively, however, you’re better off writing your own loop, which will complete its work in half the time.

Some final thoughts on looping with LINQ

LINQ is efficient, saves you from writing a lot of code, and is neat, clean and easily read. It also allows you to execute queries against different providers. But as demonstrated in Figure 9, it can also add undesired complexity and degrade performance. You’ll get into trouble if you think of LINQ as a single method call with constant execution time rather than the shortcut to different algorithms that it is. The complexity involved in looping is what matters. In some cases, you still need to write your own loops instead of relying on LINQ.

Working with the XAML Layout System

To create the page layout, the XAML layout system measures and then arranges each container. During the measure pass, each container recursively measures its children, asking each one for its desired size. During the arrange pass, all elements are then positioned within the available rectangle. These two passes are implemented through the corresponding MeasureOverride and ArrangeOverride methods.

Before going further into the layout system code, let's look at the main layout panels, their capabilities and their implementation. Canvas, StackPanel and Grid are the layout panels you'll work with most often. Let's also consider virtualization; we'll show how to use one of its implementations, VirtualizingStackPanel.


Canvas

Canvas arranges items on a single surface using absolute offsets. It performs better than Grid and StackPanel because measuring one child doesn't depend on the desired size of the others. In most cases, though, the differences among Canvas, Grid and StackPanel are relatively small, since each of them measures all of its children. Canvas is suitable when children are positioned only by absolute coordinates (for example, a simple designer or drawing tool). If your scenario requires children to depend on each other for size or position, say, applying translate transforms to mimic a grid- or list-like layout, consider another layout panel or a custom implementation.


StackPanel

StackPanel arranges child elements into a single line that can be oriented horizontally or vertically, providing the functionality common to lists. StackPanel measures all of its items, even those that aren't visible. If the panel of an ItemsControl needs to display many elements, consider using a virtualized panel instead of StackPanel.


Grid

Grid arranges elements in a grid-like layout with both column and row definitions. Compared to Canvas and StackPanel, Grid has a heavier measure cycle. It's the default panel for new UserControls and Windows.

Virtualization (VirtualizingStackPanel)

In most scenarios where performance is an issue, panels are used within ItemsControl for realizing multiple items. In these situations, the best approach is to provide a panel that supports virtualization, such as VirtualizingStackPanel. With this approach, the elements realized in the visual tree are limited to the items currently visible, cutting the time for measure/arrange.

Comparing the panels

Let's set up a realistic scenario in which each panel serves as the items host of an ItemsControl such as ListBox, and measure the load time. The measurements show the duration of the layout cycle (Measure + Arrange). The code involved is similar to that shown in Figure 15.

Figure 15 Code for Canvas, StackPanel, Grid and VirtualizingStackPanel


Canvas:

<UserControl.Resources>
    <Style TargetType="ListBoxItem">
        <Setter Property="Canvas.Left" Value="10" />
    </Style>
</UserControl.Resources>
<ListBox ItemsSource="{Binding Data}">
    <ListBox.ItemsPanel>
        <ItemsPanelTemplate>
            <Canvas />
        </ItemsPanelTemplate>
    </ListBox.ItemsPanel>
</ListBox>


StackPanel:

<ListBox ItemsSource="{Binding Data}">
    <ListBox.ItemsPanel>
        <ItemsPanelTemplate>
            <StackPanel />
        </ItemsPanelTemplate>
    </ListBox.ItemsPanel>
</ListBox>


Grid:

<UserControl.Resources>
    <Style TargetType="ListBoxItem">
        <Setter Property="Grid.Column" Value="1" />
    </Style>
</UserControl.Resources>
<ListBox ItemsSource="{Binding Data}">
    <ListBox.ItemsPanel>
        <ItemsPanelTemplate>
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition />
                    <ColumnDefinition />
                </Grid.ColumnDefinitions>
            </Grid>
        </ItemsPanelTemplate>
    </ListBox.ItemsPanel>
</ListBox>


VirtualizingStackPanel:

<ListBox ItemsSource="{Binding Data}">
    <ListBox.ItemsPanel>
        <ItemsPanelTemplate>
            <VirtualizingStackPanel />
        </ItemsPanelTemplate>
    </ListBox.ItemsPanel>
</ListBox>

The results (in seconds) are shown in Figure 16.

Panel Type 100 Items 1000 Items 5000 Items
Canvas 0.725 4.7 28.95
StackPanel 0.7645 4.681333 29.597
Grid 0.737 4.8125 29.694
VirtualizingStackPanel 0.596 0.567667 0.5965

Figure 16 Execution Times of Panels in Figure 15

As you can see, UI virtualization can greatly improve performance. The logic involved in handling a custom layout is insignificant compared to the overhead of measuring the entire visual tree.

General suggestions for custom panels

When the default panels don’t meet our performance or behavior needs, we often create our own custom panels. Here are some suggestions to consider when creating custom panels:

  • Use virtualization when the scenario allows because it can drastically reduce load/response time.
  • Use InvalidateMeasure wisely. It triggers a layout update to the element and all its children. It also triggers both Measure and Arrange cycles.
  • When implementing custom panels, be careful using layout-specific properties, such as ActualWidth, ActualHeight, Visibility and so on. They can cause a LayoutCycleException.
  • When displaying hierarchical UI elements, consider arranging the items in a single container. Nested UI elements/panels increase measure-cycle time because unmanaged code is called at each nesting level.


As you have seen, taking a closer look at the code you’re using in your XAML applications can help you make some changes to your usual coding processes that can enhance performance throughout your applications. If you understand the complexity of the dependency property system, you can optimize your code for faster retrievals. If you know exactly how LINQ uses collections, you can make decisions that lead to faster and more efficient loops. Finally, if you’re aware of how the layout system operates and how to optimize custom controls when you need them, you can create more responsive XAML applications.

Georgi Atanasov and Tsvyatko Konov are senior software engineers at Telerik. You can reach Georgi at georgi.atanasov@telerik.com and Tsvyatko at Tsvyatko.Konov@telerik.com.
