Secure and trusted partial grey-box verification

  • Regular Contribution
  • International Journal of Information Security

Abstract

A crucial aspect in the development of software-intensive systems is verification. This is the process of checking whether the system has been implemented in compliance with its specification. In many situations, the manufacture of one or more components of the system is outsourced. We study the case of how a third party (the verifier) can verify an outsourced component effectively, without access to all the details of the internal design of that component built by the developer. We limit the design detail that is made available to the verifier to a diagram of interconnections between the different design units within the component, but encrypt the design details within the units and also the intermediate values passed between the design units. We formalize this notion of limited information using tabular expressions to describe the functions in both the specifications and the design. The most common form of verification is testing, and it is known that black-box testing of the component is not effective enough in deriving test cases that will adequately determine the correctness of the implementation, and the safety of its behaviour. We have developed protocols that allow for the derivation of test cases that take advantage of the design details disclosed as described above. We can regard this as partial grey-box testing that does not compromise the developer’s secret information. Our protocols work with both trusted and untrusted developers, as well as trusted and untrusted verifiers, and allow for the checking of the correctness of the verification process itself by any third party, and at any time. Currently our results are derived under the simplifying assumption that the software design units are linked acyclically. We leave the lifting of this assumption as an open problem for future research.


Notes

  1. In general, the two pairs (\(D, R\)) and (\(D^{spec}, R^{spec}\)) may not be the same. Nevertheless, in this case there must be a predetermined mapping, provided by the implementation designer, that converts (\(D^{spec}, R^{spec}\)) to (\(D, R\)) and vice versa, since otherwise verification against the specification is meaningless as a process.

References

  1. Alspaugh, T.A., Faulk, S.R., Britton, K.H., Parker, R.A., Parnas, D.L.: Software requirements for the A-7E aircraft. Tech. Rep., DTIC Document (1992)

  2. Hayhurst, K.J., Veerhusen, D.S., Chilenski, J.J., Rierson, L.K.: A practical tutorial on modified condition/decision coverage. Tech. Rep. TM-2001-210876, NASA (2001)

  3. Gentry, C.: A fully homomorphic encryption scheme. Ph.D. thesis, Stanford University (2009)

  4. Naor, M.: Bit commitment using pseudorandomness. J. Cryptol. 4(2), 151 (1991)

  5. Collberg, C., Thomborson, C., Low, D.: A taxonomy of obfuscating transformations. Tech. Rep., Department of Computer Science, The University of Auckland, New Zealand (1997)

  6. Wang, C., Hill, J., Knight, J., Davidson, J.: Software tamper resistance: obstructing static analysis of programs. Tech. Rep. CS-2000-12, University of Virginia (2000)

  7. Barak, B., Goldreich, O., Impagliazzo, R., Rudich, S., Sahai, A., Vadhan, S., Yang, K.: On the (im)possibility of obfuscating programs. In: CRYPTO 2001, pp. 1–18. Springer (2001)

  8. Chaki, S., Schallhart, C., Veith, H.: Verification across intellectual property boundaries. ACM Trans. Softw. Eng. Methodol. (TOSEM) 22, 15 (2013)

  9. Lynn, B., Prabhakaran, M., Sahai, A.: Positive results and techniques for obfuscation. In: EUROCRYPT 2004, pp. 20–39. Springer (2004)

  10. Yao, A.C.: Protocols for secure computations. In: Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (FOCS), pp. 160–164. IEEE (1982)

  11. Goldwasser, S., Kalai, Y.T., Rothblum, G.N.: One-time programs. In: CRYPTO 2008, pp. 39–56. Springer (2008)

  12. Goldwasser, S., Kalai, Y., Popa, R.A., Vaikuntanathan, V., Zeldovich, N.: Reusable garbled circuits and succinct functional encryption. In: Proceedings of the 45th ACM Symposium on Theory of Computing (STOC), pp. 555–564. ACM (2013)

  13. Cormode, G., Mitzenmacher, M., Thaler, J.: Practical verified computation with streaming interactive proofs. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 90–112. ACM (2012)

  14. Thaler, J., Roberts, M., Mitzenmacher, M., Pfister, H.: Verifiable computation with massively parallel interactive proofs. In: Proceedings of the 4th USENIX Conference on Hot Topics in Cloud Computing, HotCloud’12 (2012)

  15. Vu, V., Setty, S., Blumberg, A.J., Walfish, M.: A hybrid architecture for interactive verifiable computation. In: Proceeding of the 2013 IEEE Symposium on Security and Privacy, SP’13, pp. 223–237. IEEE (2013)

  16. Setty, S., McPherson, R., Blumberg, A.J., Walfish, M.: Making argument systems for outsourced computation practical (sometimes). In: Proceedings of the 19th Annual Network & Distributed System Security Symposium (2012)

  17. Setty, S., Vu, V., Panpalia, N., Braun, B., Blumberg, A.J., Walfish, M.: Taking proof-based verified computation a few steps closer to practicality. In: Proceedings of the 21st USENIX Conference on Security Symposium, Security’12, pp. 253–268 (2012)

  18. Parno, B., Howell, J., Gentry, C., Raykova, M.: Pinocchio: Nearly practical verifiable computation. In: Proceedings of the 2013 IEEE Symposium on Security and Privacy (SP), pp. 238–252. IEEE (2013)

  19. Ben-Sasson, E., Chiesa, A., Genkin, D., Tromer, E., Virza, M.: SNARKs for C: Verifying program executions succinctly and in zero knowledge. In: Proceedings of CRYPTO 2013, pp. 90–108. Springer (2013)

  20. Arora, S., Safra, S.: Probabilistic checking of proofs: a new characterization of NP. J. ACM (JACM) 45(1), 70 (1998)

  21. Wassyng, A., Janicki, R.: Tabular expressions in software engineering. Fundam. Inf. 67, 343 (2005)

  22. Malkhi, D., Nisan, N., Pinkas, B., Sella, Y.: Fairplay—a secure two-party computation system. In: Proceedings of the 13th Conference on USENIX Security Symposium SSYM’04, vol. 13, pp. 20–20 (2004)

  23. Huang, Y., Evans, D., Katz, J., Malka, L.: Faster secure two-party computation using garbled circuits. In: Proceedings of the 20th USENIX Conference on Security SEC’11, pp. 35–35 (2011)

  24. Katz, J., Lindell, Y.: Introduction to Modern Cryptography. CRC Press, Boca Raton (2014)

  25. National Bureau of Standards: Data Encryption Standard. FIPS PUB 46 (1977)

  26. Vaikuntanathan, V.: Computing blindfolded: new developments in fully homomorphic encryption. In: Proceedings of the 52nd IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 5–16. IEEE (2011)

  27. Gentry, C., Halevi, S., Vaikuntanathan, V.: \(i\)-hop homomorphic encryption and rerandomizable Yao circuits. In: Proceedings of CRYPTO 2010, pp. 155–172. Springer (2010)

  28. Valiant, L.G.: Universal circuits (preliminary report). In: Proceedings of the 8th Annual ACM Symposium on Theory of Computing (STOC), pp. 196–203. ACM (1976)

  29. Sander, T., Young, A., Yung, M.: Non-interactive cryptocomputing for NC1. In: Proceedings of the 40th Annual Symposium on Foundations of Computer Science (FOCS), pp. 554–566. IEEE (1999)

  30. Acar, A., Aksu, H., Uluagac, A.S., Conti, M.: A survey on homomorphic encryption schemes: theory and implementation (2017). arXiv:1704.03578

  31. Zhou, B., Pei, J.: Preserving privacy in social networks against neighborhood attacks. In: Proceedings of the 24th IEEE International Conference on Data Engineering, ICDE’08, pp. 506–515. IEEE (2008)

  32. Cheng, J., Fu, A.W., Liu, J.: K-isomorphism: privacy preserving network publication against structural attacks. In: Proceedings of the 2010 ACM SIGMOD International Conference on Management of data, pp. 459–470. ACM (2010)


Author information

Corresponding author

Correspondence to George Karakostas.


An example for Sect. 4

In this subsection, we use the example in Fig. 15 to show how to apply our verification scheme to verify a specific table graph: Fig. 15 shows an initial table graph that is to be verified by the verifier V.

Fig. 15 An initial table graph \(\mathcal {G}\) of an implementation

First, the developer transforms this initial table graph into the table graph G introduced in Sect. 2.2 (see Fig. 1c). Then

$$\begin{aligned}
F_1(a)&= \left\{ \begin{array}{ll} (\top ,a-20), &\quad \text {if } a>45 \\ (\bot ,\bot ), &\quad \text {otherwise} \end{array} \right. \\
F_2(a)&= \left\{ \begin{array}{ll} (\top ,a-5), &\quad \text {if } 35\le a\le 45 \\ (\bot ,\bot ), &\quad \text {otherwise} \end{array} \right. \\
F_3(a)&= \left\{ \begin{array}{ll} (\top ,a), &\quad \text {if } 25\le a<35 \\ (\bot ,\bot ), &\quad \text {otherwise} \end{array} \right. \\
F_4(a)&= \left\{ \begin{array}{ll} (\top ,20), &\quad \text {if } a<25 \\ (\bot ,\bot ), &\quad \text {otherwise} \end{array} \right. \\
F_5(z)&= \left\{ \begin{array}{ll} (\top ,True), &\quad \text {if } z>30 \\ (\bot ,\bot ), &\quad \text {otherwise} \end{array} \right. \\
F_6(z)&= \left\{ \begin{array}{ll} (\top ,False), &\quad \text {if } z\le 30 \\ (\bot ,\bot ), &\quad \text {otherwise} \end{array} \right. \\
F_7(b)&= \left\{ \begin{array}{ll} (\top ,2), &\quad \text {if } b=True \\ (\bot ,\bot ), &\quad \text {otherwise} \end{array} \right. \\
F_8(b)&= \left\{ \begin{array}{ll} (\top ,3), &\quad \text {if } b=False \\ (\bot ,\bot ), &\quad \text {otherwise} \end{array} \right.
\end{aligned}$$
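To make the table semantics concrete, the following is a minimal plain-Python rendering of \(F_1,\ldots ,F_8\) (an illustration only; BOT is our stand-in for \(\bot \) and is not part of the paper's notation):

```python
# Illustrative only: cleartext versions of the tables F_1..F_8 from Fig. 15.
# Each table returns (True, value) when its lhs predicate holds and (BOT, BOT)
# otherwise, where BOT stands in for the "undefined" symbol ⊥.

BOT = None  # stand-in for ⊥

def F1(a): return (True, a - 20) if a > 45 else (BOT, BOT)
def F2(a): return (True, a - 5) if 35 <= a <= 45 else (BOT, BOT)
def F3(a): return (True, a) if 25 <= a < 35 else (BOT, BOT)
def F4(a): return (True, 20) if a < 25 else (BOT, BOT)
def F5(z): return (True, True) if z > 30 else (BOT, BOT)
def F6(z): return (True, False) if z <= 30 else (BOT, BOT)
def F7(b): return (True, 2) if b is True else (BOT, BOT)
def F8(b): return (True, 3) if b is False else (BOT, BOT)

# For a = 46 only F1's guard holds, producing (True, 26); since 26 <= 30,
# F6 then produces (True, False), matching the walkthrough below.
assert F1(46) == (True, 26)
assert F6(26) == (True, False)
```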

The developer applies our content-secure verification scheme VS to G by running VS.Encrypt as follows: \((G',hpk,U) \leftarrow VS.Encrypt(1^K,G)\). Figure 16 shows the resulting encrypted table graph \(G'\).

Fig. 16 An encrypted table graph \(G'\) after applying VS to G
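As a rough illustration of what the verifier ends up holding, here is a hypothetical mock of the VS.Encrypt interface (the names TableGraph, EncryptedTableGraph and vs_encrypt are ours, not the paper's, and the bodies are placeholders, not the actual construction): the interconnection diagram of G remains visible, while the contents of each table are replaced by opaque encrypted circuits \(E_{C_i}\).

```python
# Hypothetical interface mock only; the real VS.Encrypt compiles each table into
# an FHE-evaluable circuit and keeps the secret key with the developer DL.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Tuple

@dataclass
class TableGraph:                        # cleartext G
    tables: Dict[int, Callable]          # i -> F_i
    edges: List[Tuple[int, int]]         # acyclic interconnections between tables

@dataclass
class EncryptedTableGraph:               # G' as seen by the verifier V
    circuits: Dict[int, Any]             # i -> E_{C_i}: opaque, encrypted table contents
    edges: List[Tuple[int, int]]         # the interconnection diagram stays visible

def vs_encrypt(security_param: int, g: TableGraph):
    hpk = object()                       # placeholder homomorphic public key
    u = object()                         # placeholder auxiliary data U for FHE.Eval
    g_prime = EncryptedTableGraph(
        circuits={i: ("E_C", i) for i in g.tables},   # stand-ins for the E_{C_i}
        edges=list(g.edges),
    )
    return g_prime, hpk, u
```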

According to our protocol, the verifier V receives \(G'\) and performs the verification on \(G'\). We show how V can do MC/DC verification [2] on \(G'\). MC/DC performs structural coverage analysis: test cases are first generated by analysing a given program's requirements, and it is then checked whether these test cases actually cover the program's structure, identifying any parts of the structure that remain uncovered. We assume that the VGA used by V performs MC/DC verification after it has generated the test cases. Suppose V runs VGA to generate the test cases based on requirements-based tests (by analysing \(G^{spec}\)), and that these test cases are stored in EI. Then V picks an external input X to \(G'\) from EI and starts evaluating \(G'\) with X.
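The structural-coverage bookkeeping itself is straightforward; the sketch below (our illustration, not the paper's VGA) records, for each external input in EI, which table predicates the protocol reports as satisfied, and flags any predicate that no test case exercises:

```python
# Illustrative coverage bookkeeping for the verifier V (not the paper's VGA).
# satisfied_predicates(X) is assumed to return the set of table indices whose
# lhs predicate held on external input X, as learned through the query protocol.

def structural_coverage(EI, satisfied_predicates, all_tables):
    covered = set()
    for X in EI:
        covered |= satisfied_predicates(X)
    return covered, set(all_tables) - covered

# Example using the outcome of the walkthrough below for X = (a=46, b=True),
# where the predicates of PT_1, PT_6 and PT_7 turn out to be satisfied:
EI = [(46, True)]
covered, uncovered = structural_coverage(EI, lambda X: {1, 6, 7}, range(1, 9))
print(covered)    # {1, 6, 7}
print(uncovered)  # {2, 3, 4, 5, 8}: further test cases from EI are still needed
```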

For \(X=(a=46,b=True)\), V sends the following queries to the developer DL (the queries are in the format of the input of VS.Encode): \(Q_1=(1,(\top ,46),null,q_1)\), \(Q_2=(2,(\top ,46),null,q_1)\), \(Q_3=(3,(\top ,46),null,q_1)\), \(Q_4=(4,(\top ,46),null,q_1)\), \(Q_5=(7,(\top ,True),null,q_1)\), \(Q_6=(8,(\top ,True),null,q_1)\), because it needs to evaluate \(PT'_1\), \(PT'_2\), \(PT'_3\), \(PT'_4\), \(PT'_7\), \(PT'_8\), and therefore needs an encoding of the external inputs for each of these tables. We take the evaluation of the path (Input \(\rightarrow PT'_1 \rightarrow PT'_6 \rightarrow \) Output) as an example. For query \(Q_1\), DL evaluates \(VS.Encode(Q_1)\) and returns its output \(FHE.Enc(hpk, 46)\), which is the input \(x^1\) to \(PT'_1\). Then V runs \(FHE.Eval(hpk,U,x^1,E_{C_1})\), which outputs \(PT'_1(x^1)\). After this, V sends \((1,x^1,PT'_1(x^1),q_2)\) to DL. We know that for \(PT_1 \in G\), if 46 is the input to \(PT_1\), then the output will be \((\top ,26)\); thus for the query \((1,x^1,PT'_1(x^1),q_2)\), DL evaluates \(VS.Encode(1,x^1,PT'_1(x^1),q_2)\) and returns \(\top \). Hence, V knows that for \(a=46\) as an external input, the lhs predicate (a decision and condition) of \(PT_1 \in G\) is satisfied, and the rhs function of \(PT_1 \in G\) is covered.

After finishing the evaluation of \(PT'_1\), V starts evaluating \(PT'_5\) and \(PT'_6\) with \(PT'_1(x^1)\) as their input; in particular, \(x^6=PT'_1(x^1)\) is \(PT'_6\)'s input. After evaluating \(PT'_6\), V gets \(PT'_6(x^6)\) as the output. Then V sends \((6,x^6,PT'_6(x^6),q_2)\) to DL, and DL evaluates \(VS.Encode(6,x^6,PT'_6(x^6),q_2)\). Because \(26\le 30\), \(PT_6\)'s output is \((\top ,False)\); since the output of \(PT'_6\) is an external output, VS.Encode outputs the decoded value False. Therefore, DL returns False to V. Then V knows that \(y1=False\), as well as the fact that the lhs predicate (a decision and condition) of \(PT_6 \in G\) is satisfied and the rhs function of \(PT_6 \in G\) is covered.
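For readers who prefer pseudocode, the verifier's side of this exchange along the path Input \(\rightarrow PT'_1 \rightarrow PT'_6 \rightarrow \) Output can be sketched as follows (our illustration, not the paper's pseudocode; dl_encode stands for the developer DL answering VS.Encode queries and fhe_eval for FHE.Eval, both hypothetical callables):

```python
# Sketch of V's side of the exchange along Input -> PT'_1 -> PT'_6 -> Output.
# dl_encode and fhe_eval are hypothetical callables standing in for the
# protocol primitives VS.Encode (answered by DL) and FHE.Eval.

def evaluate_path_1_6(dl_encode, fhe_eval, hpk, u, E_C, a=46):
    # q1 query: DL encodes the external input a for table 1 -> FHE.Enc(hpk, a).
    x1 = dl_encode((1, ("TOP", a), None, "q1"))
    out1 = fhe_eval(hpk, u, x1, E_C[1])              # homomorphic evaluation of PT'_1
    # q2 query: DL confirms that the lhs predicate of PT_1 was satisfied.
    assert dl_encode((1, x1, out1, "q2")) == "TOP"

    x6 = out1                                        # PT'_1's output feeds PT'_6
    out6 = fhe_eval(hpk, u, x6, E_C[6])              # homomorphic evaluation of PT'_6
    # PT'_6 feeds an external output, so DL decodes it to a cleartext value.
    return dl_encode((6, x6, out6, "q2"))            # -> False (the external output y1)
```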

After evaluating \(G'\) with \(X=(a=46,b=True)\) by similar steps to those described above and obtaining the external output \(Y=(\bot ,False,\bot ,2,\bot )\), V knows that, for X, the lhs predicates of \(PT_1\), \(PT_6\) and \(PT_7\) are satisfied while the lhs predicates of the remaining tables are not. Hence, V knows that the predicates of \(PT_1\), \(PT_6\) and \(PT_7\) are True, that the predicates of the remaining tables of G are False, and that the statements (the rhs functions of the tables in G) of \(PT_1\), \(PT_6\) and \(PT_7\) are covered. Moreover, V compares Y with \(G^{spec}(X)\) to see whether G behaves as expected with X as an external input.

V will keep evaluating \(G'\) with the remaining external inputs in EI and, by interacting with DL as described above, it performs the structural coverage analysis of the requirements-based test cases. In this way V learns whether the external inputs in EI cover every predicate in G. Additionally, it learns whether G behaves as expected according to the requirements specification described by \(G^{spec}\).
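Putting the pieces together, the verifier's overall loop over EI can be summarised as follows (again our sketch, under the assumption that evaluate(X) runs the query protocol on one external input and returns the external output Y together with the set of satisfied predicates, and that g_spec is an executable form of \(G^{spec}\)):

```python
# Illustrative top-level loop for V: structural coverage plus behavioural
# conformance against the specification G^spec. `evaluate` and `g_spec` are
# hypothetical callables; they are not defined in the paper under these names.

def verify(EI, evaluate, g_spec, all_tables):
    covered, conforms = set(), True
    for X in EI:
        Y, satisfied = evaluate(X)                 # one protocol run, as described above
        covered |= satisfied                       # accumulate covered predicates
        conforms = conforms and (Y == g_spec(X))   # compare with G^spec(X)
    uncovered = set(all_tables) - covered
    return conforms, uncovered                     # behaves as specified; what is missed
```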


About this article


Cite this article

Cai, Y., Karakostas, G. & Wassyng, A. Secure and trusted partial grey-box verification. Int. J. Inf. Secur. 18, 677–700 (2019). https://doi.org/10.1007/s10207-019-00431-4
